I'm working on a remote machine, and while installing some software I ran into the "clock skew detected" warning and the whole thing fails. I've used "find . -exec touch {} \;" to update the timestamps of the files, but it still fails, and, even weirder, the timestamps are still ahead of the real time. Is there any way to tell make to ignore timestamps?
You don't specify exactly what "working on a remote machine" means, or how you're sharing files between the local and remote systems, but I'll assume that you're using NFS or some other remote partition mounting facility. In that case nothing you can do on your local system will help. You have to synchronize the clocks on the local and remote systems.
The timestamps applied to modified files in an NFS share are controlled by the NFS server, not by your local system. So when your local system modifies a file, the modification time is the server's current time, not your local system's.
If the two systems' clocks are not synchronized, then tools like make which work based on file modification times cannot work properly.
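For a quick sanity check and a one-off fix, something along these lines usually works (the host name nfsserver is a placeholder, and the availability of ntpdate is an assumption about your setup):

    # Compare the local clock with the NFS server's clock
    date -u; ssh nfsserver date -u

    # One-off resync of whichever machine is off (assumes ntpdate is installed)
    sudo ntpdate pool.ntp.org

    # Empirical check of the skew make will see
    touch testfile && ls -l --time-style=full-iso testfile && date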
I met a strange problem on one cluster with 10 nodes.
On any node, any file operation sets the access/modification/change time of that file to a point in the future, about 1 min 52 s ahead of the current system time reported by date. That makes make commands unable to work correctly.
The following commands were tested: touch X, echo 123456 > X, and calling utimes(X, NULL) and utime(X, NULL) from a C program. All of them reproduce the problem.
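For example, a minimal way to see the skew (X is just a scratch file):

    date
    touch X
    stat -c '%y' X    # comes out roughly 1min52s ahead of the date output above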
Is there any way to solve the problem? Thanks.
The usual way to address this is to synchronize the clocks on all of the machines to a common time reference using ntp (usually to a reliable time server). The NTP FAQ and HOWTO is a good place to start.
For most Linux servers, just installing the ntp package takes you halfway. You may need to customize the configuration file (usually /etc/ntp.conf), as well as enable the service for ntpd (the NTP daemon).
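As a rough sketch on a Debian/Ubuntu-style system (package, file, and service names are assumptions that vary by distribution):

    sudo apt-get install ntp        # installs ntpd with a default pool configuration
    sudoedit /etc/ntp.conf          # point the server/pool lines at your preferred time servers
    sudo service ntp restart        # or enable it with your init system of choice
    ntpq -p                         # confirm peers are reachable and an offset is being tracked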
First of all, this is the first time I'm posting a question on StackOverflow, so please don't kill me if I've done anything wrong.
Here's my issue:
We have a few dedicated servers with a well-known French provider. With one of those servers we have recently acquired 5,000 GB of backup space which can be mounted via NFS, and that's what we've done.
The issue comes when backing up big files. Every night we back up several VMs running on that host, and we know for a fact that the backups are not being done properly (the file size differs a lot from one day to the next, plus we've checked the content of the backups and there's stuff missing).
So it seems like the mount point is not stable and the backups are not being done properly. It looks like there are brief network cuts, and the hypervisor therefore cuts the current backup short and starts on the next one.
This is how it's mounted right now:
xxx.xxx.xxx:/export/ftpbackup/xxx.ip-11-22-33.eu/ /NFS nfs auto,timeo=5,retrans=5,actimeo=10,retry=5,bg,soft,intr,nolock,rw,_netdev,mountproto=tcp 0 0
Any advice? Is there any parameter you would change?
We need to be sure that the NFS mount point is correctly working in order to have proper backups.
Thank you so much
By specifying "soft" as an option, you're saying that it's OK for the mount to be unreliable: the kernel may return an I/O error instead of running the I/O to completion when things are taking too long. Using a hard mount (omitting the "soft" option) instructs the kernel not to return I/O errors on timeouts.
This will fix your corrupted backups, but... your backup process will hang hard until the I/Os complete. An alternative is to use much longer timeout values.
You're using TCP for the mount protocol, but not for NFS itself. If your server supports it, consider adding "tcp" to the options line.
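As a sketch, the fstab entry could become something like the line below: soft is dropped in favour of hard, tcp is added for NFS itself, and the timeout is loosened (the exact numbers are assumptions to tune for your link):

    xxx.xxx.xxx:/export/ftpbackup/xxx.ip-11-22-33.eu/ /NFS nfs auto,hard,intr,tcp,timeo=600,retrans=2,actimeo=10,retry=5,bg,nolock,rw,_netdev,mountproto=tcp 0 0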
Is there a way to boost svn performance when the working copy is running over NFS?
(*) It is required for the working copy to be on the NFS-mounted partition (/home).
I guess the SVN client reads the whole tree looking for changes when committing. I have no idea what could make a checkout slow.
According to the Subversion FAQ:
Working copies can be stored on NFS (one common scenario is when your home directory is on a NFS server). On Linux NFS servers, due to the volume of renames used internally in Subversion when checking out files, some users have reported that 'subtree checking' should be disabled (it's enabled by default). Please see NFS Howto Server Guide and exports(5) for more information on how to disable subtree checking.
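For example, disabling subtree checking is a per-export option in /etc/exports on the NFS server (the export path and client subnet below are placeholders):

    # /etc/exports -- no_subtree_check disables subtree checking for this export
    /home    192.168.1.0/24(rw,sync,no_subtree_check)

    sudo exportfs -ra    # re-export with the new options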
Checkout performance can be constrained by a few factors, but most likely in your case it's I/O to that NFS mount - unless you're saturating the network connection, or the server is undersized.
Use the "nolock" mount option. With it, locking is handled locally in the client OS rather than on the NFS server.
I got better performance from this option.
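For example (server name and paths are placeholders):

    # Locking is handled locally on the client instead of via the NFS lock manager
    sudo mount -t nfs -o nolock,rw server:/export/home /home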
Checkout performance over NFS is abysmal, to the point that it becomes the major bottleneck, not the bandwidth to the Subversion server. rsvn by Bryce Denney and Wilson Snyder is a perl script that logs into the NFS server using ssh (assuming that is allowed) and runs the svn command remotely. In my tests it gives orders of magnitude faster performance. Excerpt from the man page:
NAME
rsvn - remote svn - run subversion commands on the file server if possible
SYNOPSIS
rsvn ANY_SVN_COMMAND
rsvn update
rsvn --test
DESCRIPTION
When possible, run the svn command on the file server where it does not have to wait for NFS. Otherwise run svn as usual. Some SVN commands will always run locally, either for "safety" or because there is no benefit of running on the file server (svn log).
The commands that will be sent to the file server by default are these (and their abbreviations):
add checkout cleanup diff merge resolved
revert status switch update
Why is commit not run remotely? Because it will either start an editor, which won't always work through noninteractive SSH, or you might use -m "FOO BAR" and the shell's quoting gets all screwed up. It would be good to figure out how to solve these problems, and add "commit" to the list.
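If installing rsvn isn't an option, the same idea fits in a few lines of shell; this sketch assumes the working copy is visible at the same path on the file server and that the host name is fileserver:

    # Minimal rsvn-like wrapper: run svn on the file server so it works on local disk, not over NFS
    rsvn() {
        ssh fileserver "cd '$PWD' && svn $*"
    }

    rsvn update
    rsvn status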
We have a strange problem here at work that I've been unable to figure out. We all use MacBooks with Snow Leopard on our desktops, and we have a handful of Linux servers we also use remotely. Some of my team members put git repositories on an NFS filesystem that's shared between both the Macs and the Linux servers, so they don't have to think about sharing code between repositories in their personal workflow.
This is where the strangeness starts: on the OS X machines, git will randomly report some files as out of date when you try to merge or switch branches, etc. If you run git status, no files are shown as out of date. gitk will show the files as modified but not committed, in the same way status normally does. If you reset --hard those files, you can sometimes change branches before this reoccurs, but mostly not. If you log into one of the Linux machines and view the same repository, everything works perfectly: the files are not marked as changed and you can do whatever you like.
I've eliminated line-ending differences and file-mode differences as the culprit, but I'm not sure what else to try. Is there some OS X-specific NFS interaction that we have to work around somehow?
Maybe unsynchronized time between the servers and workstations makes the modification times of the files unreliable. Does setting core.trustctime to false help? (It is true by default.)
There is an even heavier setting: core.ignoreStat, which ignores the complete stat(2) information in the change-detection code.
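If you want to experiment, both can be set per repository; treat them as things to try rather than a confirmed fix:

    # Stop trusting ctime when detecting changes (it is true by default)
    git config core.trustctime false

    # Heavier: ignore stat(2) information entirely in change detection
    git config core.ignoreStat true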
I have many machines (20+) connected in a network. Each machine accesses a central database, queries it, processes the queried information, and then writes the results to files on its local hard drive.
Following the processing, I'd like to be able to 'grab' all these files (from all the remote machines) back to the main machine for storage.
I thought of three possible ways to do so:
(1) rsync to each remote machine from the main machine, and 'ask' for the files
(2) rsync from every remote machine to the main machine, and 'send' the files
(3) create a NFS share on each remote machine, to which the main machine can access and read the files (no 'rsync' is needed in such a case)
Is one of these ways better than the others? Are there better ways I'm not aware of?
All machines run Ubuntu 10.04 LTS. Thanks in advance for any suggestions.
You could create one NFS share on the master machine and have each remote machine mount that. Seems like less work.
Performance-wise, it's practically the same. You are still sending files over a (relatively) slow network connection.
Now, I'd say which approach you take depends on where you want to handle errors or irregularities. If you want the responsibility to lie with your processing computers, rsync back to the main one; the other way round if you want the main one to do the work of assembling the data and ensuring everything is in order.
As for the shared space approach, I would create a share on the main machine, and have the others write to it. They can start as soon as the processing finishes, ensure the file is transferred correctly, and then verify checksums or whatever.
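A rough sketch of that flow from a processing machine's side (host name, export path, and file names are placeholders; the share is assumed to already be exported from the main machine):

    # Mount the share exported by the main machine
    sudo mount -t nfs main:/srv/results /mnt/results

    # Copy the finished files into a per-host directory, then verify checksums
    mkdir -p /mnt/results/$(hostname)
    cp results/*.dat /mnt/results/$(hostname)/
    (cd results && md5sum *.dat) > /tmp/local.md5
    (cd /mnt/results/$(hostname) && md5sum -c /tmp/local.md5)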
I would prefer option (2), since you know when the processing is finished on the client machine. You could use the same SSH key on all client machines or collect the different keys in the authorized_keys file on the main machine. It's also more reliable: if the main machine is unavailable for some reason, you can still sync the results later, whereas in the NFS setup the clients are blocked.
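A sketch of option (2), run on each processing machine once its job finishes (user, host, and paths are placeholders; the key is assumed to already be in authorized_keys on the main machine):

    # Push results to the main machine over ssh; --partial lets interrupted transfers resume
    rsync -az --partial -e ssh results/ backup@mainhost:/srv/results/$(hostname)/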