I have a zip file that's split into parts titled file1.zip, file2.zip, file3.zip, etc.
How do I go about extracting these files together correctly? They should produce one output file.
Thanks for the help!
First, rename them to "file.zip", "file.z01", "file.z02", etc., as Info-ZIP expects them to be named, and then run unzip on file.zip. Info-ZIP will iterate through the split files as expected.
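For example, a minimal sketch of that rename-and-extract flow from a shell, assuming file3.zip is the final segment of the set (the part that holds the central directory):
mv file1.zip file.z01
mv file2.zip file.z02
mv file3.zip file.zip
unzip file.zip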
I found a way. I had to mount the remote machine's user home folder on my Ubuntu desktop PC and use File Roller Archive Manager, which is just listed as Archive Manager in Ubuntu 18.
Mount remote home folder on local machine...
Install sshfs
sudo apt install sshfs
Make a directory for the mount. Replace remote with whatever folder name you want
mkdir remote
Mount the remote file system locally, replacing linuxusername with the user account you want to log in with and xxx.xxx.xxx.xxx with the remote machine's IP address or hostname.
sudo sshfs -o allow_other linuxusername@xxx.xxx.xxx.xxx:/ remote
Now, in the mounted "remote" folder, you can see the contents of the whole Linux filesystem and navigate it in a file manager just like your local filesystem, limited by user privileges of course: you can only write to the home folder of the remote user account.
Using Archive Manager I opened up the .zip file of the spanned set (not the .z01, .z02, etc. files) and extracted inside the "remote" folder. I saw no indication of extraction progress; the bar stayed at 0% until it was complete. Other X11-based archive applications might work too.
This is slow, about 3-5 megabytes per second on my LAN. I noticed Archive Manager uses 7z to extract, but I don't know how, as 7z is not supposed to support spanned zip sets.
Also, if your SSH server is dropbear instead of OpenSSH's sshd, it will be unbearably slow for large files. I had to extract a 160 GB archive, and the source filesystem was FAT32, so I was not able to combine the spanned set into one zip file because of FAT32's 4 GB file size limit.
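When you are done with the share, it can be detached like any other FUSE mount (this assumes the same "remote" folder as above):
sudo fusermount -u remote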
I have a few Azure SMB File Shares mounted on a Linux VM.
In one of those file shares I have two folders, one called download and another called loaded.
Files get dropped in download; they get processed and moved into loaded. But sometimes we have to move the files from loaded to download again from our laptops (running Windows). And when we do this, the files can't be moved back to loaded.
Essentially:
I mount file-share
I run mv /mnt/file-share/download/file.txt /mnt/file-share/loaded/file.txt
I drag and drop file.txt from loaded to download from my laptop
Up to here everything works. But when I try to run mv /mnt/file-share/download/file.txt /mnt/file-share/loaded/file.txt again, it returns:
mv: /mnt/file-share/download/file.txt /mnt/file-share/loaded/file.txt are the same file
If I now umount and mount file-share again, it works. So this makes me think that it's a caching issue.
So I tried mounting with cache=none but it still does the same thing.
Any suggestions?
Thank you!
Using the noserverino option while mounting the share fixed the issue.
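For reference, this is roughly what that looks like on the mount command line; the storage account, share name, credentials file and modes below are placeholders rather than values from the question:
sudo mount -t cifs //mystorageacct.file.core.windows.net/file-share /mnt/file-share -o vers=3.0,credentials=/etc/smbcredentials/mystorageacct.cred,dir_mode=0777,file_mode=0777,noserverino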
I am attempting to mount a file from my Windows (Host) to my Linux (Container). When I mount a single file with a standard extension, everything seems to work fine. However, when I attempt to mount a single file that is a dot-file, it does not work.
// This does not work
type=bind,source=${env:USERPROFILE}\\.sample,target=/home/.sample,consistency=cached
// This does work
type=bind,source=${env:USERPROFILE}\\sample.txt,target=/home/sample.txt,consistency=cached
I'm not sure how to specify that the file is a dot file. I did notice that if the file did not exist, a folder named .sample was created on my Windows (Host) machine, but that same folder was not created on Linux (Container).
Are you sure they aren't there? Linux treats dotfiles as hidden files, so they aren't visible with a plain ls command.
You can use ls -A, which should show you the hidden dotfiles.
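A quick check from inside the container, assuming the target path from the question:
ls -A /home
If the bind mount worked, .sample shows up in that listing even though a plain ls omits it.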
I am attempting to mount a remote directory located on my web server to a directory in my Xubuntu installation, hosted in VirtualBox.
I'm using the following command syntax:
sshfs root@*.*.*.*:/var/www Desktop/RemoteMount
Using the file manager, I navigate to the Desktop/RemoteMount directory but find it entirely blank. The SSHFS command above executed with no indication of an error.
Completely by chance, I used the terminal to long-list the contents of the Desktop/RemoteMount directory, and it showed all the data I was expecting to see in the file manager.
Can anyone tell me why the file manager does not show my remotely mounted data and how I might fix it?
Thanks.
You are missing the local mount point.
sshfs -o idmap=user mika@192.168.1.2:/home/mika/remotepoint /home/mika/localmountpoint
And the local mount point folder needs to exist.
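Putting both steps together, using the same example paths and host as above:
mkdir -p /home/mika/localmountpoint
sshfs -o idmap=user mika@192.168.1.2:/home/mika/remotepoint /home/mika/localmountpoint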
thanks Mika
When I try to save a file to disc within a project directory, I get this error:
java.io.IOException: W:\[projectname]\.idea not found
Some research tells me the (network) location is not writable.
I'm trying to write this file from PhpStorm on Windows 8.
The drive (W:) is a network drive to a Linux machine.
The directory I try to write to is chowned to the same user and group as I connect with in Windows.
This is a result of ls -alh:
drwxrwxrwx 2 correct-user correct-user
On Linux and other Unix-like operating systems, files starting with a . are considered 'hidden files' by default. As such, when the Windows-based program creates the file, it suddenly doesn't see it anymore right afterwards because it's hidden, even though the creation was successful. You can fix this in your Samba config by adding the following line to the share configuration:
hide dot files = no
In my Samba settings I had added a veto files parameter. Removing this parameter allowed me to write dot files again.
Samba describes this setting as follows:
This is a list of files and directories that are neither visible nor accessible
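For illustration, a share section combining both answers could look something like this in smb.conf; the share name and path are made-up placeholders:
[projects]
    path = /srv/samba/projects
    read only = no
    hide dot files = no
    ; a veto files entry like the one below is what makes dot files invisible and inaccessible; remove it if present
    ; veto files = /.*/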
I'd like to create a directory that links to an external site via ssh.
So that when I cd to /var/remote/dev01 it will actually cd to a folder on a remote site.
That way I stay in my current terminal and can copy files from any other dir to this /var/remote/dev01 dir, and they will be copied over to the remote host.
Is this even doable?
Yes, this is possible. Take a look at SSHFS. It lets you mount a remote filesystem over ssh and treat it as a local mount point for standard filesystem operations.
Here's a nice walkthrough to get you started.
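A minimal sketch of what that could look like, where the remote account dev01user, the host dev01.example.com and its project folder /home/dev01user are all placeholders:
sudo mkdir -p /var/remote/dev01
sudo sshfs -o allow_other dev01user@dev01.example.com:/home/dev01user /var/remote/dev01
cp somefile.txt /var/remote/dev01/
The cp at the end writes somefile.txt straight onto the remote host, which is exactly the behaviour you're after.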