Sorry if this has been answered - I tried to search, but didn't find anyone with quite the same issue..
I'm trying to basically move all files from one drive (mountpoint) to another.. I initially used
mv /mnt/old1/* /mnt/disk1
This seemed to be working, but I had a power failure in the middle of it, and when I re-issue the command it runs into trouble because the directory structure already exists in the destination, so it no longer moves the source files.
Basically, at this point, I'm just trying to merge two directory structures into one. I guess I could cp the structure, but I would really have no way to know if a file was skipped as I do with mv, since if it's still in the source drive, I can assume it wasn't moved..
Is there a better way to do this? I've never used rsync, but from what I'm reading, perhaps this is a better option?
Any help would be greatly appreciated - I've got millions of files (18+ TB) to move and I don't want to inadvertently miss something..
Thanks!
Steve
I just tried the following, and it works.
mv -ui /old/* /new/
-u for update mode
-i to prompt if a file already exists at the destination (just as a double check; maybe unnecessary)
I do not know whether the trailing slash "/" after "/new" matters. Afterwards, the files remaining in /old/ are the ones that were not moved.
Hope this can help :)
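If you do end up going the rsync route mentioned in the question, something along these lines should behave the same way (just a sketch, using the paths from the original post; --remove-source-files deletes each source file only after it has been transferred, so anything left behind in the source was not copied):

rsync -a --remove-source-files /mnt/old1/ /mnt/disk1/
# re-running after an interruption simply resumes; only missing or changed files are sent
# empty directories remain in the source and can be cleaned up afterwards with:
find /mnt/old1 -depth -type d -empty -delete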
I believe the question is self-explanatory, so I'll make more efficient use of the body by sharing why I asked in the first place, in the hope of getting a better solution than the one I'm currently attempting -- two for one.
Basically, I am trying to sync two local directories bi-directionally in a way that respects a kind of .gitignore logic, i.e. ignoring particular files and directories. Better yet, I would love something along the lines of whitelisting!
I am familiar with tools like rsync and unison that get the syncing part done but not the ignoring/whitelisting.
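For what it's worth, rsync's filter rules can at least approximate the ignore part for a one-way pass; a minimal sketch with a made-up exclude file (the bi-directional half and true whitelisting are still open questions):

# exclude.txt is hypothetical: one pattern per line, roughly .gitignore-style
# (rsync's matching rules differ slightly), e.g. *.tmp or node_modules/
rsync -a --exclude-from=exclude.txt /path/to/dirA/ /path/to/dirB/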
You can get the original file name and delete it when deleting the symlink. For example:
rm symlink_name $(readlink -f symlink_name)
But remember that if there are other symlinks to the same file, then they'll be dangling.
I am working with the environment variable $PATH, and I found that $PATH includes the /snap/bin directory, which does not exist on my system. What is that path for? Can I remove it from $PATH, or should I leave it?
Please give me your suggestions. Thank you very much.
It is a fairly new Canonical thing to bundle and distribute applications.
See for example this developer link by Canonical.
Personally, I also find it somewhat odd that they went into the top level of the filesystem via /snap, but oh well.
I may yet come to use it one day. So far plain Docker serves me well, besides of course building .deb packages the old-fashioned way.
As for removing the PATH entry: it only saves you a few bytes, plus nanoseconds in lookups and may break a future deployment involving snaps. Your box, your call. I left mine.
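If you do decide to drop it, one throwaway way to strip the entry for the current shell only (a sketch; add it to ~/.profile if you want it to persist):

# rebuild PATH without the /snap/bin entry
export PATH="$(printf '%s' "$PATH" | tr ':' '\n' | grep -vx '/snap/bin' | paste -sd: -)"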
I am planning on filing a bug against coreutils for this, as this behavior is unexpected and there isn't any practical use for it in the real world... Although it did make me chuckle at first, as I never even knew one could create files with a wildcard in their name. How practical is a filename with a wildcard in it? Who even uses such a feature?
I recently ran a bash command similar to this:
ln -s ../../avatars/* ./
Unfortunately, I did not add the correct number of "../", so rather than giving me an informative error, it merely created a link to a "*" file which does not exist. I would only have expected that behaviour from this:
ln -s "../../avatars/*" ./
As this is the proper way to address such a filename.
Before I submit a bug on coreutils, I would like the opinion of others. Is there any practical use for this behavior, or should ln provide a meaningful error message?
And yes, I know one can just link to the entire directory, rather than each file within, but I do not wish newly created files to be replicated to the old location. There are only a few files in there that are being linked right now.
Some might even say that using a wildcard when symlinking is bad practice. However, I know the contents of the directory exactly, and this is much quicker than doing each file manually.
This isn't a bug.
In the shell, if you use a wildcard pattern that doesn't match anything, then the pattern isn't substituted. For example, if you do this:
echo *.c
If you have no .c files in the current directory, it will just print "*.c". If there are .c files in the current directory, then *.c will be replaced with that list.
For many commands, specifying files that don't exist is an error, and you get a message that seems to make sense, like "cannot access *.c". But ln -s is creating a symbolic link, and the target of a symbolic link doesn't have to exist, so it goes ahead and makes the link.
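If you would rather have the shell itself complain about an unmatched glob, bash can do that (bash-specific, not POSIX sh; just a sketch):

shopt -s failglob      # an unmatched glob now raises an error instead of being passed literally
ln -s ../../avatars/* ./
# bash: no match: ../../avatars/*

With shopt -s nullglob instead, an unmatched glob expands to nothing, and the ln call then fails outright rather than creating a bogus "*" link.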
I am trying to write a script or a piece of code to archive files, but I do not want to archive anything that is currently open. I need a way to determine which files in a directory are open. I want to use either Perl or a shell script, but can try other languages if needed. It will be in a Linux environment, and I do not have the option to use lsof. I have also had inconsistent results with fuser. Thanks for any help.
I am trying to take log files in a directory and move them to another directory. If the files are open however, I do not want to do anything with them.
You are approaching the problem incorrectly. You want to keep files from being modified underneath you while you are reading them, and you cannot do that without operating system support. The best that you can hope for on a multi-user system is to keep your archive metadata consistent.
For example, if you are creating the archive directory, make sure that the number of bytes stored in the archive matches the source directory. You can checksum the file contents before and after reading them, compare that with what you wrote to the archive, and perhaps flag the file as "inconsistent".
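As a rough illustration of that idea (a sketch only; the file names are made up and this is not a robust archiver):

src=/var/log/app.log                 # hypothetical file to archive
dst=/archive/app.log                 # hypothetical destination
before=$(md5sum < "$src")
cp "$src" "$dst"
after=$(md5sum < "$src")
# if the checksum changed while we were copying, someone wrote to the file underneath us
[ "$before" = "$after" ] || echo "WARNING: $src changed during archiving (inconsistent)" >&2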
What are you trying to accomplish?
Added in response to comment:
Look at logrotate to steal ideas about how to handle this consistently, or just have it do the work for you. If you are concerned that renaming files will break processes that are currently writing to them, take a look at man 2 rename:
rename() renames a file, moving it between directories if required. Any other hard links to the file (as created using link(2)) are unaffected. Open file descriptors for oldpath are also unaffected.

If newpath already exists it will be atomically replaced (subject to a few conditions; see ERRORS below), so that there is no point at which another process attempting to access newpath will find it missing.
Try ls -l /proc/*/fd/* as root.
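Building on that, a rough sketch of checking a single file (run as root so you can see every process's fd directory; isopen.sh is just a name I made up):

#!/bin/sh
# isopen.sh: report which PIDs currently have the given file open,
# by scanning the fd symlinks under /proc; exits 0 if any are found
target=$(readlink -f "$1")
found=1
for fd in /proc/[0-9]*/fd/*; do
    if [ "$(readlink -f "$fd" 2>/dev/null)" = "$target" ]; then
        pid=${fd#/proc/}; pid=${pid%%/*}
        echo "open by PID $pid"
        found=0
    fi
done
exit $found

An archiving loop could then do something like: isopen.sh "$f" >/dev/null || mv "$f" /archive/ (keeping in mind the race conditions described above: a file can be opened right after the check).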
msw has answered the question correctly, but if you want to find the list of open files, the lsof command will give it to you.
I have a small problem with the SHFILEOPSTRUCT structure. I am trying to delete the files in a temp directory, but if some of the files are in use, the operation doesn't delete any files at all.
What I want is for it to delete all the files that are not in use, without displaying any dialog.
How could I fix this?
At the moment I am using the FOF_NOCONFIRMATION flag.
Edit
Oh, I was wrong. When I use FOF_NOCONFIRMATION and FOF_NO_UI as flags, nothing happens and the call returns 32.
If I just use FOF_NOCONFIRMATION, a dialog box pops up and I can skip all the files that are in use; all other files are deleted.
If SHFILEOPSTRUCT can't skip them silently, how can I handle that problem?
I don't think it has that capability -- as far as I can tell, you're pretty much stuck re-implementing the equivalent behaviour yourself if you want to continue without user intervention in spite of any problems that might arise.