My intention is to keep two directories (say dir1 and dir2) in sync, so that whenever the content of dir1 changes (a file or directory added or deleted, or a file's content modified), the change is propagated to dir2, and vice versa.
The naive way I can think of doing this is to run rsync periodically via cron on both machines. But there are flaws in this approach:
The previous rsync may not have completed when cron starts rsync once more, so two runs overlap (see the sketch below).
A new file is added in dir1, and if rsync runs on dir1 before it has run on dir2, the newly added file might be deleted from dir1 because it is not yet present in dir2.
Also, this is not real-time.
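For reference, the overlapping-runs problem could presumably be avoided by wrapping the cron job in flock(1), so that a new run simply exits while a previous one still holds the lock (a sketch; the lock file path and directories here are hypothetical):

#!/bin/sh
# hypothetical cron wrapper: -n makes flock exit immediately
# instead of waiting if a previous rsync still holds the lock
exec flock -n /var/lock/dir-sync.lock \
    rsync -a --delete /path/to/dir1/ remote:/path/to/dir2/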
Can someone suggest a better way of doing this?
It strongly depends on the purpose. 'Realtime' is probably not the term you are looking for.
Take a look at https://www.gluster.org/ (replicated mode) for synchronous replication over the network.
I tried to copy 17171 files, but whatever parameters I use, only 17160 get copied, so 11 files are lost.
The same command on another directory copied accurately (16545 files).
I also tried using cp, but it also lost 11 files.
When I check the folder with Finder, there should be 17171 files there...
rsync -arvz src dst
cp src dst
Above are the commands I've tried.
There can be a number of issues at play:
One of the more common issues is that the target filename is illegal on the remote system, for example trying to copy a file with a colon : in the filename from UNIX to Windows.
There may also be permission issues reading the files that were not copied; check their permissions.
Finally, you could try zipping (or tarring) the bunch of files into a single file, and transfer just that instead. Typically you'll see the problem when unpacking that file on the remote system.
EDIT: Another thought: are the files that did not copy really, really large, too large to store remotely?
If you rsync with the -P option, it should only re-transfer files that were not copied. It will also print progress, which should give you a better idea of what's not copying.
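To pin down exactly which 11 files are missing, one option is to compare sorted file lists from the two trees (a minimal sketch, assuming both are locally accessible and no filenames contain newlines):

# print paths that exist under src but not under dst
comm -23 <(cd src && find . -type f | sort) \
         <(cd dst && find . -type f | sort)

The names it prints usually make the cause (illegal characters, permissions, size) obvious.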
Well, I know you have to use -r with cp and rm when dealing with directories. It makes them do their job recursively (copying or removing everything inside, including nested directories).
But why don't you need "mv -r" when moving/renaming directories?
Directories are just collections of pointers to locations of files on the filesystem. When you move a directory you are updating the file pointers of the new and old parents to contain/remove the one you moved. Thus, child file pointers inside do not require recursive action as none of the pointer locations have actually changed for them.
EDIT: I've just found a much more detailed answer on Unix & Linux StackExchange that will help explain this further.
https://unix.stackexchange.com/questions/46066/why-unix-mv-program-doesnt-need-r-recursive-option-for-directories-but-cp-do
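A quick way to see the difference in behaviour (the exact cp error text varies by platform):

mkdir -p a/b/c
cp a a_copy     # refused: cp omits the directory unless -r/-R is given
mv a z          # succeeds instantly, no matter how much is inside a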
For every move, only a new location is needed.
If you want to move all the files under a directory along with the directory itself, just move the directory; everything inside comes along with it, so the operation is effectively recursive for free.
Today I ran into a very tough problem which cost me nearly 6 hours.
When I remove a file called ha_wan.conf under the /etc directory using the rm -rf ha_wan.conf command, it succeeds. When I check the result with ls -al, the file is gone.
But when I reboot the Linux system, the same file named ha_wan.conf comes back under the /etc/ directory.
I have tried deleting it many, many times, always with the same result.
What should I do? I want to permanently remove that file. Thanks.
There's no magic. You removed the file. If you still see it after a reboot, it means one of two things:
(very likely) Some service recreates the files on boot, or periodically. You can probably use standard system tools to find out which package contained that file. (for example dpkg -S ha_wan.conf in debian-like systems)
(unlikely) You're running some interesting system which uses a temporary filesystem in /etc. If you're using a standard desktop distribution, that's improbable. But if it's some kind of router / special device, then it could happen.
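If neither of those leads anywhere, one way to catch the culprit in the act is a file watch. A sketch assuming auditd is installed: set the watch once the file has reappeared, delete it, and check the log afterwards (if it is recreated at boot, the rule has to be made persistent, e.g. under /etc/audit/rules.d/):

auditctl -w /etc/ha_wan.conf -p wa -k hawan   # log writes/attribute changes
ausearch -k hawan                             # show which process recreated it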
The history feature is great for remembering commands. Is there a feature that remembers recent directories?
I'd like to be able to search through a history of directories - it'd be even better if it was possible to bookmark and name them, as you can do in a browser.
You can do cd -1 to get back to the previous directory on the directory stack, cd -2 for the one before that, and so on. You can also refer to stack entries using ~1, like cp ~1/README.md ~2/
For a more advanced use, you can use the dirs builtin. You can also use pushd and popd to stack up directories and get back to them later on, pretty useful in scripts.
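A minimal illustration of the stack in action:

pushd /etc        # go to /etc, remembering where we came from
pushd /var/log    # go further; /etc becomes entry ~1
dirs -v           # print the stack, one numbered entry per line
popd              # back to /etc
popd              # back to where we started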
cf. the directory stack section of the Bash manual.
Zsh has the same facility, the dirstack. And with zsh, you can have more fun with directory bookmarks.
Finally, there's even a crazy guy who implemented a GUI for listing the dirstack. Not sure how useful that can be, but it's definitely crazy enough to be worth referencing :-)
HTH
To save directories and keep a history, you can try pushd and popd in bash.
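For the bookmarking part of the question, here is a minimal bash sketch (the function names bookmark and goto are invented for the example):

bookmark() { printf -v "BM_$1" '%s' "$PWD"; }    # save the current dir under a name
goto()     { local v="BM_$1"; cd "${!v}"; }      # jump back to a saved name

Then bookmark site in a project directory lets you return later with goto site, for as long as the shell session lives.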
If I run find (Ubuntu, specifically), can I expect it to give me the same order of results every time? (Assuming, of course, that the actual files haven't changed.)
In other words, if I run
$ find foo
and it gives me
bar.txt
foo.txt
can I expect that it will never give me
foo.txt
bar.txt
?
The answer is "probably" but you shouldn't rely on it because any number of things can affect it.
What order do you want the files in? Decide on that and then use a find command (perhaps piped into sort) which reproducibly gets the result you need.
The order of the files is determined by the fine details of the filesystem format and the filesystem driver. You can't rely on it. Depending on the filesystem and operating system, here are things that might change the order:
A file is created or removed in a traversed directory (even if none of the listed files changed).
The files are moved around (e.g. transferred to a different filesystem or restored from backup).
A defragmenter or filesystem check ran and decided to move things around.
If you want a reproducible order, sort the results. find … | sort will do nicely if none of the file names contain newlines.
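As a sketch of that, with the GNU tools the newline caveat can also be worked around using NUL-terminated records:

find foo | sort                                # fine when no file names contain newlines
find foo -print0 | sort -z | xargs -0 ls -d    # NUL-separated variant (GNU find/sort)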