undoing interrupted 'mv' (move) command [closed] - linux

I hope this is the right place to ask this question.
While trying to move a big directory "mydirname" (about 900 GB) on a remote Linux server from /abc/source to /xyz/target, I used the following command in the source directory:
mv mydirname /xyz/target/ &
However, after a while the process was interrupted and produced errors like:
mv: cannot stat `mydirname/GS9/set04/trans/run.3/acc': Stale file handle
mv: cannot stat `mydirname/GS9/set04/trans/run.4/amc': Stale file handle
...
and many more such messages mentioning different subdirectory locations.
The problem is that the process had already moved about 300 GB of data, but many directories were not fully moved. A similar problem occurred with another transfer (about 500 GB) that was running on the same machine.
Also, I am no longer in the same working session. I have disconnected and reconnected to the remote server.
It would be great if you could help with the following queries.
Is it possible that some of the files are not fully transferred? (I have seen such cases with the 'cp' command, where an interrupted process leaves a smaller file at the destination.)
How can I resume the process so that I do not lose any data? Will the 'mv' command be enough, or is there a special command that can work in the background?
Otherwise, is there a command to undo the process and restore 'mydirname' to its original location in the source?

Use "rsync" to complete a job like this:
rsync -av --delete mydirname/ /xyz/target/mydirname/
It will verify that every file made it to the destination with the proper length and timestamps, and --delete will remove any leftover garbage at the destination.
You can test first with a "dry run" to see what the damages are:
rsync -avn --delete mydirname/ /xyz/target/mydirname/
This goes through the whole rsync process but doesn't actually do anything. It's usually a good idea to run this test to check your command syntax and see if it's going to do what you think it should do.
The "rsync" command actually behaves more like a copy ("cp") than a move ("mv"): it leaves the source files in place, and you can delete them later, once you are satisfied that everything has transferred correctly.
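Once rsync finishes cleanly, a minimal sketch of the final clean-up might look like this (the destination path assumes /xyz/target/mydirname from the question; verify before deleting anything):
# optional final pass: -c compares checksums instead of size/time (slow but thorough)
rsync -avc mydirname/ /xyz/target/mydirname/
# only once you are satisfied nothing differs, remove the source to complete the "move"
rm -rf mydirname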

Related

dd imaging a failing disk which drops connection [closed]

I am trying to image a hard disk which is failing.
The disk routinely drops out during the imaging process, and when the system re-recognises it, it appears under a different device name (/dev/sdb becomes /dev/sde), which causes the imaging program to fail.
I have tried imaging each partition independently, but on a 500 GB disk I am struggling to get past 100 GB per session before the disk drops (I think the head is going, as it clicks).
My question is: using dd, is there a way to image the disk in, say, 50 GB parts, so that I can capture the whole disk over a number of sessions and then consolidate the pieces?
Or, better still, is there a way to force the disk to reappear at its previous device name?
I have found little information on this topic so any insight would be useful.
Thanks.
When the device is lost, your stream will be lost too; you cannot recover it, even if the disk gets the same device name assigned again. However, you might want to employ udev rules to get the same name back, just for your convenience.
In dd, you can use four useful parameters:
bs=BYTES the size of a "block"
skip=N number of blocks to skip in input
seek=N number of blocks to skip in output
count=N number of blocks to be copied (we don't need it here)
dd also has a somewhat hidden feature of providing progress reports. You can either use status=progress or send a signal to the process. The latter is more complicated, but it lets you control how often progress is reported. For example, you can run this in another terminal:
for ((;;)); do sleep 1; kill -USR1 `pidof -s dd`; done
Putting all of this together, you can use bs=4M as a reasonable block size. Run the aforementioned loop in a secondary terminal if you want signal-driven progress, then start dd, initially with
dd bs=4M seek=0 skip=0 if=/dev/… of=…
After it fails the first time, you use the last block number that was successfully copied by dd as parameters to seek and skip. You can be a bit conservative here (decrease the number a bit) to ensure you don't get any "holes" in your output.
Repeat until the whole disk is done. Good luck!
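As a concrete sketch (the device name, image path, and block numbers below are made up; use whatever dd actually reported before it failed, and remember the disk may come back under a different name):
dd if=/dev/sdb of=/mnt/backup/disk.img bs=4M status=progress
# suppose dd managed 25000 blocks of 4 MiB before the disk dropped; re-check the device name,
# back off a few blocks, and resume without truncating what is already in the image (conv=notrunc)
dd if=/dev/sdb of=/mnt/backup/disk.img bs=4M skip=24990 seek=24990 conv=notrunc status=progress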

Copy linux partition on a usb stick [closed]

I would like to know how to copy a Linux partition (example: /dev/sda1) onto a USB stick, and then boot from the USB stick.
I tried to just copy it with the cp command, but when I tried to boot from it, it booted from the partition I copied (/dev/sda1) and not from the USB stick.
In short, I want to create a USB stick containing my Linux partition that I can boot from on any computer.
Thank you.
cp is great for copying files, but you should consider it too high-level for copying partitions. When you copy a partition, you read from a device file and write to another device file, a regular file, or whatever. With cp, many file attributes might be changed: modification time, owner, permissions, etc. That isn't acceptable for partition copies; e.g. files owned by root should still be owned by root, and ~/.ssh/config should still have permissions 600.
The program for this task is dd, which copies bit-by-bit. You specify an input file and an output file:
dd if=/dev/sda of=/dev/sdf bs=512
This copies the contents of /dev/sda to /dev/sdf while reading 512 bytes at a time (bs = block size). After some time it will finish and report some statistics. To get statistics during copying, you can send the SIGUSR1 signal to the dd process.
Please be aware that dd is a dangerous tool if used incorrectly: for example, it won't ask for permission before overwriting your 10,000-picture vacation album; it simply does it. Make sure you specify the correct device files!
You also have to make sure the sizes of source and destination fit: the destination needs to be at least as large as the source. If you have a 500 GB hard disk, copying it to a 4 GB USB stick won't work.
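A quick way to compare the two sizes before copying (the device names here are only examples):
lsblk -b -o NAME,SIZE /dev/sda /dev/sdf    # -b prints exact sizes in bytes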
Copying a whole hard disk also copies the boot loader. One issue might be that the entries in the boot loader configuration reference the wrong disks. However, starting the boot loader itself should be no problem (provided the architecture matches). If you use GRUB, you even get a command line, which you can use to boot the system manually.
Change your BIOS settings so that the first boot device is USB.

Executable Deleting Itself on linux [closed]

As the superuser, I executed the following command on Linux:
rm rm
which removes itself. I thought that while a process is in execution, its reference count is not zero, and hence it cannot be deleted. So I am bemused: how and why does this happen?
I tried the same with chown 0000 chown as well.
cp -r Dir1/ Dir2/
In the above command, too: what happens if I delete the source directory while the copy is in progress?
It is the same as for temporary files.
Recall that a usual way to create a temporary file is to open(2) a file (keeping its file descriptor), then unlink(2) it (while still having an open file descriptor). The data of the file then remains in the file system as long as the process is running and has not close(2)-d that file descriptor.
This is because files really are inodes, not file names in directories (directories contain entries associating names with inodes).
The kernel manages the set of "used" (or "opened") inodes, and that set contains the inodes executed by processes (actually, the inodes involved in some address mapping, e.g. through mmap(2) or execve(2)).
So just after /bin/rm /bin/rm starts, the kernel holds one reference to the rm binary as the executable of the process.
When it processes the unlink syscall, there are temporarily two references (one held by the executing process, the other the path /bin/rm passed to the kernel's unlink implementation), and the count then drops back to one.
Of course you should avoid typing /bin/rm /bin/rm; but if you do, you usually have some standalone shell like sash available to repair your system.
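You can watch the same mechanism at work with a small shell experiment (the file name and descriptor number are arbitrary):
echo "some data" > /tmp/demo.txt
exec 3< /tmp/demo.txt    # keep an open read descriptor on the file
rm /tmp/demo.txt         # the directory entry (the name) is gone
ls /tmp/demo.txt         # "No such file or directory"
cat <&3                  # but the data is still readable through the open descriptor
exec 3<&-                # closing the descriptor drops the last reference; the inode is now freed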
On Windows, "rm rm" would probably not be possible, because of the reference count you mentioned. On most *nix systems, however, it is: rm (and likewise chown) is loaded into memory and only then executes whatever the command line specified. Another example: edit a file in one window and, while editing it, remove it in another window. That too is possible on most *nix systems, regardless of reference counts.
You can't delete a directory using rm until it's empty.

Make a "copy" of linux system [closed]

Let's assume I have a hard drive with some Linux distribution on it. My task is to set up a similar system (similar distro, kernel version, software versions, etc.) on another hard drive. How can I do that if:
Case a: I'm allowed to use any software I want (including software like VirtualBox to make a full image of the system)?
Case b: I'm not allowed to use anything but standard Linux utilities to retrieve the characteristics I need, and must then install a "fresh" system on the other hard drive manually?
Thanks for reading. It's hard for me to express what I mean; I hope you understood it.
One word: CloneZilla
It can clone partitions and disks and copies the boot record. You can boot it from a CD or USB drive or even over the network (PXE).
You could go with dd but it's slow because it copies everything, even the empty space on disk, and if your partitions are not the same size you can have various problems, so I do not recommend dd.
You could also boot the system from some live CD like Knoppix, mount the partitions, and copy everything using cp -a, running something like watch df in a second terminal to monitor the progress. But even then you need to fix up the boot loader after the copy is done.
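A rough sketch of that manual route, assuming the source root is /dev/sda1, the new disk is /dev/sdb with its root partition /dev/sdb1, and GRUB 2 as the boot loader (all of these are placeholders for your own layout):
mkdir -p /mnt/source /mnt/target
mount /dev/sda1 /mnt/source
mount /dev/sdb1 /mnt/target
cp -a /mnt/source/. /mnt/target/                            # -a preserves owners, permissions, timestamps, links
grub-install --boot-directory=/mnt/target/boot /dev/sdb     # put a boot loader on the new disk
You will usually also need to adjust /etc/fstab and the GRUB configuration on the target if they reference the old disk by UUID or device name.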
I used to use various manual ways to clone Linux systems in the past, until I discovered CloneZilla. Life is much easier since then.
The easiest way is to use dd from the command prompt.
dd if=/dev/sda of=/dev/sdb bs=8192
dd (the "disk duplicator") is used for exactly this purpose; I would check the man page to make sure the block-size argument suits your disks, though. The other two arguments are if (input file) and of (output file). The of= hard drive should be the same size as or larger than the if= hard drive.
You can create an exact copy of the system on the first disk with dd or cpio and a live CD.
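For the cpio variant, a minimal sketch (assuming the source is mounted at /mnt/old and the freshly formatted destination at /mnt/new; both paths are placeholders):
cd /mnt/old
find . -xdev -depth -print0 | cpio -0pdm /mnt/new    # -p pass-through copy, -d create directories, -m preserve mtimes
Here -xdev keeps find on a single filesystem, and you still need to set up a boot loader on the new disk afterwards, as in the other answers.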

rm not freeing diskspace [closed]

I've rm'ed a 2.5 GB log file, but it doesn't seem to have freed any space.
I did:
rm /opt/tomcat/logs/catalina.out
then this:
df -hT
and df reported my /opt mount still at 100% used.
Any suggestions?
Restart Tomcat. If the file is in use when you remove it, the space only becomes available once the process holding it exits.
As others suggested, the file is probably still held open by another process. To find out which one, you can run
lsof /opt/tomcat/logs/catalina.out
which lists the processes using it. You will probably find Tomcat in that list.
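If you are not sure which path to check, lsof can also list every file that is open but already deleted:
sudo lsof +L1    # selects open files with a link count below 1, i.e. deleted but still held open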
Your Problem:
It's possible that a running program is still holding on to the file.
Your Solution:
Per the other answers here, you can simply shut down Tomcat to stop it from holding on to the file.
If that is not an option, or if you simply want more details, check out this question: Find and remove large files that are open but have been deleted - it suggests some harsher ways to deal with it that may be more useful to your situation.
More Details:
On Linux/Unix filesystems, an open file handle effectively acts as another "name" for the file. rm removes the "name" seen in the directory tree, but until all handles are closed the file still has other "names", and so it still exists. The filesystem doesn't reclaim a file until it is completely unnamed.
It might seem a little odd, but doing it this way allows for useful things like hard links, which are essentially alternate names for the same file.
This is why it is important to always call your language's equivalent of close() on a file handle when you are done with it; this notifies the OS that the file is no longer being used. Sometimes this can't be helped, which is likely the case with Tomcat. Refer to Bill Karwin's answer to read why.
Depending on the file-system, this is usually implemented as a sort of reference count, so there may not be any real names involved. It can also get weird if things like stdin and stderr are redirected to a file or another bytestream (most commonly done with services).
This whole idea is closely related to the concept of 'inodes', so if you are the curious type, I'd recommend reading up on that first.
Discussion
It doesn't work so well anymore, but you used to be able to update the entire OS, start up a new HTTP daemon using the new libraries, and finally shut down the old one once no more clients were being served by it (releasing the old handles). HTTP clients wouldn't even miss a beat.
Basically, you can completely replace the kernel and all the libraries "from underneath" running programs. Since a "name" still exists for the older copies, the old files still exist on disk for those particular programs; it is then just a matter of restarting all the services. While this is an advanced usage scenario, it is one reason why some Unix systems have years of uptime on record.
Restarting Tomcat will release any hold Tomcat has on the file. However, to avoid restarting Tomcat (e.g. if this is a production environment and you don't want to bring the services down unnecessarily), you can usually just overwrite the file:
cp /dev/null /opt/tomcat/logs/catalina.out
Or even shorter and more direct:
> /opt/tomcat/logs/catalina.out
I use these methods all the time to clear log files for currently running server processes in the course of troubleshooting or disk clearing. This leaves the inode alone but clears the actual file data, whereas trying to delete the file often either doesn't work or at the very least confuses the running process' log writer.
As FerranB and Paul Tomblin have noted on this thread, the file is in use and the disk space won't be freed until the file is closed.
The problem is that you can't signal the Catalina process to close catalina.out, because the file handle isn't under control of the java process. It was opened by shell I/O redirection in catalina.sh when you started up Tomcat. Only by terminating the Catalina process can that file handle be closed.
There are two solutions to prevent this in the future:
Don't allow output from Tomcat apps to go into catalina.out. Instead use the swallowOutput property, and configure log channels for output. Logs managed by log4j can be rotated without restarting the Catalina process.
Modify catalina.sh to pipe output to cronolog instead of simply redirecting to catalina.out. That way cronolog will rotate logs for you.
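The cronolog option boils down to piping output through cronolog instead of redirecting it to a fixed file. A hedged sketch of the general pattern (the command name and paths are illustrative; the real change goes inside catalina.sh):
some_long_running_command 2>&1 | /usr/sbin/cronolog /opt/tomcat/logs/catalina.%Y-%m-%d.out
cronolog opens a new file whenever the date in the template changes, so the logs rotate daily without the process ever being restarted.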
The best solution is using 'echo' (as per #ejoncas' suggestion):
$ echo '' > huge_file.log
This operation is quite safe and fast (it clears roughly 1 GB of data per second), which matters when you are operating on a production server.
Don't simply remove the file with 'rm': you would first have to stop the process writing to it, otherwise the disk space won't be freed.
refer to: http://siwei.me/blog/posts/how-to-deal-with-huge-log-file-in-production
Update: the origin of this story.
In 2013, when I was working for youku.com, I found one core server down on a Saturday; the reason was that the disk was full (of log files).
So I simply ran rm log_file.log (without stopping the web app process), but found that: 1. no disk space was freed, and 2. the log file was no longer visible to me.
So I had to restart my web server (a Rails app), and the disk space was finally freed.
This was an important lesson for me. It taught me that echo '' > log_file.log is the correct way to free disk space if you don't want to stop the running process that is writing to the file.
If something still has it open, the file won't actually go away. You probably need to signal Catalina somehow to close and re-open its log files.
If there is a second hard link to the file then it won't be deleted until that is removed as well.
Enter the following command to check which deleted files are still occupying disk space:
$ sudo lsof | grep deleted
It will show the deleted files whose space is still held by running processes.
Then kill the process by PID or name:
$ sudo kill <pid>
$ df -h
Check now; the space should have been freed.
If not, use the commands below to see which files are occupying the most disk space:
# cd /
# du --threshold=(SIZE)
Specify any size; du will then show the files occupying more than that threshold, and you can delete them.
Is the rm journaled/scheduled? Try a 'sync' command to force the write.
