Despite the many topics about this error, I'm still having trouble setting up an SVN server. The server is running Scientific Linux 6, and the repositories are supposed to be stored via NFSv3 on a SunOS storage server.
I read that mounting with the "nolock" option would solve the problem, but I don't want to do that: a lot of users work on the server at the same time, and I suspect removing the locks would create new problems.
SVN is installed and works on local files, but when I try to create a repo on the remote location, the files are created, yet I get the error "database is locked" and cannot use the repo. I use the FSFS backend, which is supposed to work fine with NFS.
Does anyone have another option for me?
OK, I eventually set up a new share on the NFS server, accessible by my SVN server only, and mounted it with "nolock". It works now, but that's not really the point: I still don't know how to set this up without removing the locks.
An NFS client will normally use the NFS Lock Manager (NLM) to synchronize locking of certain files on the NFS server with other NFS clients accessing/locking the same files. The nolock mount option tells the NFS client not to use the NFS Lock Manager but instead to manage the locks locally on the NFS client machine itself. This is useful if you only have one NFS client, or several NFS clients where each client works on a different area of the exported file system, so that there is no lock contention.
It looks like you have the following:
(A) SVN_Client ==> (B) SVN_Server/NFS_Client ==> (C) NFS_Server
Where: server (B) is Scientific Linux 6, providing SVN services to clients and mounting from server (C), the SunOS storage server.
Assuming you have no other machine mounting from the NFS server and providing the same SVN services, the nolock option will work correctly, as server (B) will do all the lock management locally. There is no need/requirement to lock centrally on the NFS server.
This is true for NFSv3 which you mentioned in your question.
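For illustration, a minimal sketch of the mount on server (B), assuming an fstab entry (the hostname and paths are placeholders, not taken from your question):

# /etc/fstab on the SVN server (B) -- example hostname and paths
storage.example.com:/export/svn  /srv/svn  nfs  vers=3,nolock,hard  0  0

or, equivalently, mounted by hand:

sudo mount -t nfs -o vers=3,nolock storage.example.com:/export/svn /srv/svn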
Related
I have been getting this issue when I perform svn update for a shared directory (on a remote server) from my local Windows machine.
"Error running context: An existing connection was forcibly closed by the remote host."
The directory is very large, with several folders and sub-folders. How should I fix this issue?
If you mean that you're sharing a Subversion working copy with other coworkers over a Windows share, that outcome is to be expected, because the system is not designed with that scenario in mind. Subversion needs exclusive access to the .svn directory and reasonably fast disk access, which most LAN setups don't offer.
There's even an entry in the FAQ section of the TortoiseSVN documentation:
Can I store a working copy on a network share?
This depends on the network share. But we really, really urge you to not do this! Even if you're using a Windows server and use those network shares, the fcntl() file locking is not fully reliable. And for Samba based shares all bets are off. Which means you will get a corrupted working copy and you then will lose data! Maybe not today, maybe not tomorrow, but someday you will.
Whatever your use case is, your current toolchain cannot cope with it.
Can I create and use an svn repository on an NTFS partition when working with svn in Linux? That is, repository on the NTFS partition and checkouts and commits to and from an EXT4 partition.
I realize that NTFS support in Linux is limited and does not support permissions and symbolic links for example. Would that, or any other limitations, cause any issues?
The reason I am asking is because I am thinking about either 1) moving my repository to my Dropbox folder (which resides on an NTFS partition) or 2) moving my repository to a memory stick (which could potentially be NTFS partitioned).
My use case is very simple. I am the only person using the repository. Currently my repository resides on EXT4, and I either access it from the same machine the repository is located on, or from a second machine through svn+ssh://. However, if I went with one of the options above, the access strategy would obviously change.
I would be hesitant to do this because, as you stated, NTFS partitions don't support Unix-style permissions.
The Subversion repository directory is usually owned, and can only be written to, by the user who runs whatever Subversion server process is running. For example, if you're using Apache httpd and your Apache user is called httpd, the user who owns the repository is httpd, and this would be the only user with write permissions on the files and directories.
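As a sketch, assuming an Apache-based setup with repositories under /srv/svn (the path and user name are illustrative; on Debian/Ubuntu the Apache user is typically www-data instead):

# create the repository and hand ownership to the Apache user
sudo svnadmin create /srv/svn/myrepo
sudo chown -R httpd:httpd /srv/svn/myrepo
# only the owner needs read/write access
sudo chmod -R u+rwX,go-rwx /srv/svn/myrepo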
An NTFS partition on a Windows box does have permissions set correctly, because the Subversion server process would use Windows permission settings. A Linux server will have problems.
Also, NTFS partitions are case-preserving but not case-sensitive; I don't know how this would affect a Subversion server process running on a Linux box. Again, a Windows Subversion server process would be fine with this. A Linux server may have problems.
Unfortunately, I can't say for certain one way or another. I've never tried it, nor seen it done. However, there is a post on the Wandisco Forum that covers this very scenario. The user was able to get around his problems, but I would be hesitant to say that all is beer and candy from then on.
Please say you're not doing this so that you can share a file:// protocol Subversion repository among multiple users. That is a big, fat no-no. Instead, you should at least run the svnserve process and have users access your repository via the svn:// protocol. It's very simple to set up svnserve, even as a Windows service. The only problem may be that port 3690 (the Subversion server port) is blocked by your firewall or router.
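For reference, a minimal svnserve setup might look like this (the repository root path and hostname are examples):

# serve everything under /srv/svn as a daemon (listens on port 3690 by default)
svnserve -d -r /srv/svn
# clients then access the repository with:
svn checkout svn://server.example.com/myrepo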
Dropbox multi-boot NTFS folder sync.
In an earlier, closed thread by vanadium, people were wanting a solution to sync Dropbox on multiple boot systems from one NTFS directory. Vanadium had a good suggestion that I tweaked a little bit to solve this.
You must install Dropbox in Windows (or the other system) and set up the Dropbox folder from Dropbox.
Reboot into the Linux system. (I used Ubuntu 18)
Install Dropbox to the ext4 partition.
Open a file manager in your home folder and delete the Dropbox directory. Leave this file manager open.
Open a new file manager window in the NTFS (or other) partition that the other OS's Dropbox folder is in.
Hit Ctrl+H, then drag the Dropbox folder to the directory you deleted it from. (This creates a symbolic link shortcut to the Dropbox folder you want; a terminal equivalent is sketched after these steps.)
Now sync Dropbox in Linux.
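If you prefer the terminal over drag-and-drop, the same link can be created with ln -s, assuming the NTFS partition is already mounted (the mount point and user name below are placeholders):

# remove the freshly created local Dropbox folder and link to the one on the NTFS partition
rm -rf ~/Dropbox
ln -s "/media/username/windows/Users/username/Dropbox" ~/Dropbox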
If you want Dropbox to load at startup, you must set the partition to auto-mount at startup. From a terminal:
1 - Write down the UUID of the drive that you want to mount by executing the following command:
sudo blkid
2 - Then edit the fstab:
sudo gedit /etc/fstab
3 - Add at the end of the file fstab:
UUID=D638F77338F7514B /media/baraldi/win_www ntfs defaults 0 0
Be sure the UUID matches what you recorded in the first step
4 - Restart.
Or use the "Disks" app.
Load the Disks app (in System) and select the disk with the filesystem you want to mount on startup.
Then select the filesystem on that disk and click on the gears (for configuration).
Select "Edit Mount Options" from the popup menu.
On the setup options, click to check the "Mount on Startup" box. (This will add the entry to fstab when you click on "OK").
Reboot, and your filesystem should be available.
I agree with other comments here regarding manually adding lines to fstab via the CLI/text editor. If you take the time to look at your fstab file, it will help you understand what changes have been made, and ultimately the CLI method will become faster for you.
I have a local Linux server that I'm using to back up two remote Windows 7 boxes over an IPsec VPN tunnel connection. I have the users' Documents folders shared on the remote PCs and have mounted those shares (CIFS) on my local Linux server.
I'm going to use a cron job to run rsync on my local Linux server to create backups of these folders and am currently considering the -avz args to accomplish this.
My question is this: does the -z arg do anything for me, since the mount is to a remote machine? As I understand it, -z compresses the data before sending it, which definitely makes sense if the job were being run from the remote PC, but it seems like I'm compressing data that's already been pulled through the network given my setup (which seems like it would increase the backup time by adding an unnecessary step).
What are your thoughts? Should I use -z given my setup?
Thanks!
It won't save you anything. To compress the file, rsync needs to read its contents (in blocks) and then compress them. Since reading the blocks is going to happen over the wire, pre-compression, you save no bandwidth and gain a bit of overhead from the compression itself.
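In other words, something along these lines should be enough for your setup (the paths are placeholders):

# local-to-local copy through the CIFS mount; -z would only add CPU overhead here
rsync -av /mnt/win7-box1/documents/ /backup/win7-box1/documents/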
Is there a way to boost svn performance when the working copy is running over NFS?
(*) It is required for it to be on the NFS-mounted partition (/home).
I guess the SVN client reads the whole tree looking for changes when committing. I don't have an idea of what can make a checkout slow.
According to the Subversion FAQ:
Working copies can be stored on NFS (one common scenario is when your home directory is on a NFS server). On Linux NFS servers, due to the volume of renames used internally in Subversion when checking out files, some users have reported that 'subtree checking' should be disabled (it's enabled by default). Please see NFS Howto Server Guide and exports(5) for more information on how to disable subtree checking.
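For example, disabling subtree checking on a Linux NFS server comes down to an export line like the following (the path and client range are placeholders):

# /etc/exports on the NFS server -- no_subtree_check disables subtree checking
/home  192.168.1.0/24(rw,sync,no_subtree_check)
# reload the export table afterwards
sudo exportfs -ra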
Checkout performance can be constrained by a few factors, but most likely in your case it's I/O to that NFS mount - unless you're saturating the network connection, or the server is undersized.
Use "nolock" option to mount. It is actually OS local lock, not NFS server side lock.
I got better performance from this option.
Checkout performance over NFS is abysmal, to the point that it becomes the major bottleneck, not the bandwidth to the Subversion server. rsvn by Bryce Denney and Wilson Snyder is a perl script that logs into the NFS server using ssh (assuming that is allowed) and runs the svn command remotely. In my tests it gives orders of magnitude faster performance. Excerpt from the man page:
NAME
rsvn - remote svn - run subversion commands on the file server if possible
SYNOPSIS
rsvn ANY_SVN_COMMAND
rsvn update
rsvn --test
DESCRIPTION
When possible, run the svn command on the file server where it does not have to wait for NFS. Otherwise run svn as usual. Some SVN commands will always run locally, either for "safety" or because there is no benefit of running on the file server (svn log).
The commands that will be sent to the file server by default are these (and their abbreviations):
add checkout cleanup diff merge resolved
revert status switch update
Why is commit not run remotely? Because it will either start an editor, which won't always work through noninteractive SSH, or you might use -m "FOO BAR" and the shell's quoting gets all screwed up. It would be good to figure out how to solve these problems, and add "commit" to the list.
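The idea behind rsvn is easy to approximate with a small wrapper of your own. Here is a minimal sketch, assuming the NFS path is identical on your workstation and the file server, and ignoring rsvn's command whitelist (the hostname is a placeholder):

#!/bin/sh
# run the given svn command on the file server, in the same working directory
# (assumes passwordless ssh and an identical mount path on both machines)
exec ssh -t fileserver.example.com "cd '$PWD' && svn $*"

This breaks down for arguments containing spaces, which is exactly the quoting problem the man page mentions for commit, so treat it as a starting point rather than a replacement for rsvn.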
I have many machines (20+) connected in a network. Each machine accesses a central database, queries it, processes the information queried, and then writes the results to files on its local hard drive.
Following the processing, I'd like to be able to 'grab' all these files (from all the remote machines) back to the main machine for storage.
I thought of three possible ways to do so:
(1) rsync to each remote machine from the main machine, and 'ask' for the files
(2) rsync from every remote machine to the main machine, and 'send' the files
(3) create a NFS share on each remote machine, to which the main machine can access and read the files (no 'rsync' is needed in such a case)
Is one of these ways better than the others? Are there better ways I am not aware of?
All machines use Ubuntu 10.04LTS. Thanks in advance for any suggestions.
You could create one NFS share on the master machine and have each remote machine mount that. Seems like less work.
Performance-wise, it's practically the same. You are still sending files over a (relatively) slow network connection.
Now, I'd say which approach you take depends on where you want to handle errors or irregularities. If you want the responsibility to lie with your processing computers, use rsync back to the main one; or the other way round, if you want the main one to do the work of assembling the data and ensuring everything is in order.
As for the shared space approach, I would create a share on the main machine, and have the others write to it. They can start as soon as the processing finishes, ensure the file is transferred correctly, and then verify checksums or whatever.
I would prefer option (2), since you know when the processing is finished on the client machine. You could use the same SSH key on all client machines, or collect the different keys in the authorized_keys file on the main machine. It's also more reliable: if the main machine is unavailable for some reason, you can still sync the results later, whereas in the NFS setup the clients are blocked.
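A sketch of the push-style sync each client could run (for example from cron) once its processing finishes; the host, user, and paths are placeholders:

# push results to the main machine over ssh, keeping each host's results in its own directory
rsync -av -e ssh /var/results/ backup@main.example.com:/srv/results/$(hostname)/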