I'm developing software that uses a set of big files, and I cannot download them all.
I need to reproduce a timeout error that cannot be reproduced any other way.
There is a staging host. I mounted its remote folder with sshfs, but I cannot launch a local
server instance against it, because the server can change these files and therefore requires write permissions.
With "sshfs -o ro" it fails to start.
Is it possible to have changes saved locally so that they overlay the actual bytes of the remote files?
You should be able to use UnionFS or AUFS (or any other union mount filesystem) to combine the two folders. You would keep the read-only sshfs mount and merge it with a local folder that takes precedence for writes. Reads come from the remote filesystem until a write has been made, after which the locally written copy overlays the remote one.
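As a minimal sketch with unionfs-fuse (the host name and all paths are placeholders; your distribution may package the tool under a slightly different name):
# mount the staging host's data read-only over sshfs
sshfs -o ro user@stage-host:/srv/data /mnt/remote
# prepare a local writable branch and the merged view
mkdir -p /mnt/local /mnt/merged
# copy-on-write union: writes land in /mnt/local, reads fall through to /mnt/remote
unionfs-fuse -o cow /mnt/local=RW:/mnt/remote=RO /mnt/merged
Point the local server instance at /mnt/merged; the remote files stay untouched while any modified copies accumulate under /mnt/local.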
I'm trying to spin up a Plex Media Server in Docker, and I want to mount my media as a volume, but encrypted. The flow:
1. Mount the volume from external storage on the underlying host.
2. Mount the volume into the Docker container as a volume.
3. Encrypt/decrypt the data with encfs inside the container (a rough sketch follows this list).
4. Access the data in Plex.
5. Enjoy your media.
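For reference, step 3 would look something like this inside the container (both paths are placeholders; /media matches the mount point shown below):
# /data holds the ciphertext (the volume mounted from the host),
# /media is the decrypted view that Plex is pointed at
encfs /data /media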
The issue is that mount shows:
encfs on /media type fuse.encfs (rw,nosuid,nodev,relatime,user_id=0,group_id=0,default_permissions)
The data is readable at the Unix level, BUT it is not readable to Plex (it shows up as an empty folder only).
I suspect encfs or Plex itself does not support FUSE mounts...
Any ideas? Any flags for mounting? Any way to change the mount type (it can be another "proxy" container)?
I haven't used encfs; however, in case it helps or you're not aware, the default Plex user (usually 'plex') must be the owner of, or in a group that is assigned to, the media files. In addition, if Plex is showing an empty folder, it may simply be that the folder does not have read, write AND execute set, i.e. chmod 775 (the folder needs the execute bit in order to list its contents, which is why 664 won't work). I wrote a guide over on Tech-KnowHow for this yesterday, which outlines a few ways to get this done. If you need any help with it, just leave me a comment and I'll see what I can do while it's all still fresh in my mind.
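For example, something along these lines (the library path and group name are assumptions; adjust them to your layout):
# give the plex user's group access to the library
sudo chgrp -R plex /media
# directories need the execute bit so their contents can be listed
sudo find /media -type d -exec chmod 775 {} \;
sudo find /media -type f -exec chmod 664 {} \;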
Good luck!
Direct link: https://www.tech-knowhow.com/2016/03/how-to-plex-permissions-linux/
I'm working on a project that needs to be tested on an embedded Linux system. After every little change, I have to scp all the files to the device over an SSH connection. Can you suggest a more convenient way to deploy files to a remote target? For example, some trick with make's install command:
make install INSTALL='scp 192.168.1.100:/'
or something.
If you can use scp, you can probably also use rsync, specifically rsync over ssh. rsync has the advantage that it builds a delta of the source and destination files and transfers only what is necessary. For a transfer after changing very little, this is of considerable benefit. I'd probably invoke it only if the build completes without error, like make ... && upload (where upload could be a script covering the details of the transfer).
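A hypothetical upload script could be as small as this (the address and paths are placeholders):
#!/bin/sh
# sync the build output to the target, sending only changed data over ssh
rsync -avz --delete ./build/ root@192.168.1.100:/opt/myapp/
A deploy is then just make && ./upload.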
Just for completeness, sshfs is often quite useful. You can mount a remote folder visible over ssh onto a folder on your local hard disk. Performance is not great, but it is certainly serviceable enough for a deploy step, and it's transparent to all tools.
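As a sketch, assuming your Makefile honours the conventional DESTDIR variable (the address and mount point are placeholders):
# mount the target's root filesystem locally, then install into it
sshfs root@192.168.1.100:/ /mnt/target
make install DESTDIR=/mnt/target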
Can I create and use an svn repository on an NTFS partition when working with svn in Linux? That is, repository on the NTFS partition and checkouts and commits to and from an EXT4 partition.
I realize that NTFS support in Linux is limited and does not support permissions and symbolic links for example. Would that, or any other limitations, cause any issues?
The reason I am asking is because I am thinking about either 1) moving my repository to my Dropbox folder (which resides on an NTFS partition) or 2) moving my repository to a memory stick (which could potentially be NTFS partitioned).
My use case is very simple. I am the only person using the repository. Currently my repository resides on EXT4, and I either access it from the same machine the repository is located on, or from a second machine through svn+ssh://. However, if I went with one of the options above, the access strategy would obviously change.
I would be hesitant to do this because, as you stated, NTFS partitions don't support Unix style permissions.
The Subversion repository directory is usually owned by, and can only be written to by, the user who runs whatever Subversion server process is running. For example, if you're using Apache httpd and your Apache user is called httpd, the user who owns the repository is httpd, and this would be the only user with write permissions on the files and directories.
An NTFS partition on a Windows box does have its permissions set correctly, because the Subversion server process would use the Windows permission settings. A Linux server will have problems.
Also, NTFS partitions are case-preserving but not case-sensitive. I don't know how this would affect a Subversion server process running on a Linux box. Again, a Windows Subversion server process would be fine with this; a Linux server may have problems.
Unfortunately, I can't say for certain one way or another. I've never tried it, nor seen it done. However, there is a post on the Wandisco Forum that covers this very scenario. The user was able to get around his problems, but I would be hesitant to say that all is beer and candy from then on.
Please say you're not doing this so you can share a file:// protocol Subversion repository among multiple users. That is a big, fat no-no. Instead, you should at least run the svnserve process and have users access your repository via the svn:// protocol. It's very simple to set up svnserve, even as a Windows service. The only problem may be that port 3690 (the default Subversion server port) is blocked by your firewall or router.
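As a minimal sketch (the repository path and host name are placeholders):
# serve everything under /var/svn over svn:// on the default port 3690
svnserve -d -r /var/svn
# clients then check out with
svn checkout svn://your.server.example/myrepo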
Dropbox multiboot ntfs folder sync.
In an earlier closed thread by vanadium, people were wanting a solution to sync Dropbox on multiple boot systems in one NTFS directory. Vanadium had a good suggestion that I tweaked a little bit to solve it.
Install Dropbox in Windows (or the other system) and set up the Dropbox folder there.
Reboot into the Linux system. (I used Ubuntu 18)
Install Dropbox to the ext4 partition.
Open a file manager in your Home folder and delete the Dropbox directory. Leave this file manager open.
Open a new file manager in the NTFS (or other) directory that holds the other OS's Dropbox folder.
Hit Ctrl + H, then drag the Dropbox folder to the directory you deleted it from. (This creates a symbolic link to the Dropbox folder you want.)
Now sync Dropbox in Linux.
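The same link can also be created from a terminal; the mount point below is only an example:
# assuming the NTFS partition is mounted under /media/$USER/windows
ln -s "/media/$USER/windows/Dropbox" "$HOME/Dropbox"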
If you want Dropbox to load at startup, you must set the partition to auto-mount on startup. From a terminal:
1 - Write down the UUID of the drive that you want to mount by executing the following command:
sudo blkid
2 - Then edit the fstab:
sudo gedit /etc/fstab
3 - Add the following at the end of fstab:
UUID=D638F77338F7514B /media/baraldi/win_www ntfs defaults 0 0
Be sure the UUID matches what you recorded in the first step.
4 - Restart.
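If you'd rather check the new entry without rebooting, you can mount everything listed in fstab that isn't mounted yet:
sudo mount -a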
Or use the "Disks" app.
Load the Disks app (In System) and select the disk with the filesystem you want to mount on startup.
Then select the filesystem on that disk and click on the gears (for configuration).
Select "Edit Mount Options" from the popup menu.
On the setup options, click to check the "Mount on Startup" box. (This will add the entry to fstab when you click on "OK").
Reboot, and your filesystem should be available.
I agree with the other comments here regarding manually adding lines to fstab via the CLI/a text editor. If you take the time to look at your fstab file, it will help you understand what changes have been made, and ultimately the CLI method will become faster for you.
I have a local Linux server that I'm using to back up two remote Windows 7 boxes over an IPsec VPN tunnel. I have the users' Documents folders shared on the remote PCs and have mounted those shares (CIFS) on my local Linux server.
I'm going to use a cron job to run rsync on my local Linux server to create backups of these folders and am currently considering the -avz args to accomplish this.
My question is this: does the -z arg do anything for me, since the mount is to a remote machine? As I understand it, -z compresses the data before sending it, which definitely makes sense if the job were being run from the remote PC, but with my setup it seems like I'm compressing data that has already been pulled through the network (which would increase the backup time by adding an unnecessary step).
What are your thoughts? Should I use -z given my setup?
Thanks!
It won't save you anything. To compress a file, rsync needs to read its contents (in blocks) and then compress them. Since reading the blocks already happens over the wire, pre-compression, you save no bandwidth and gain a bit of overhead from the compression itself.
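So for this setup something like the following is enough (the paths are examples; the CIFS share is assumed to be mounted at /mnt/win7-docs):
rsync -av /mnt/win7-docs/ /srv/backup/win7-docs/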
We have a network of several machines and we want to distribute a big directory (ca. 10 GB) to every box.
It is located on an NFS server and is mounted on all machines, so the first approach is to just use normal cp to copy the files from the mounted directory to a local one. This is easy, but unfortunately there is no progress bar, because cp is not intended for network copies (or is it?).
scp is intended for copying across a network, but it encrypts everything and may therefore be slow.
Should one be faster, and if so, which: cp on nfs-mount or scp?
You can always use rsync; it can show you progress (with the --progress option) and is more lightweight than scp. You can enable compression manually with -z.
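A minimal sketch (the source and destination paths are placeholders):
# copy from the NFS mount to a local directory, with per-file progress
rsync -a --progress /mnt/nfs/bigdir/ /local/bigdir/
# add -z only if the link is slow and the data compresses well; on a fast LAN it usually just costs CPU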