I'm not sure exactly what category to put this in.
I have tried the following with a 7.7 GB file on my CentOS 5.5 system:
time cp original copy
and
time cp copy copy2
Copying the copy takes about half as long as copying the original.
I thought maybe the OS was caching something, so I went to another directory, copied a few small files and such, then went back and made the copy of the copy again, and it was still way faster.
Any ideas what's going on here? Is the OS caching the file or something?
What made me notice this is that I have some code that processes this file. I wanted to test it on two files, so I just made a copy. I then noticed that the original file takes the longest to process. What kind of diagnostics can I run on this?
The OS doesn't cache the file so much as it caches the disk blocks it read.
There are a couple of ways to try to account for caching when running timing tests. You could try to flush the OS disk buffers by allocating a huge amount of memory (I usually run something like perl -e '"\0"x1024x1024x1024' to do this); running free before and after should give you an idea of how much data the OS has cached (under the buffers and cached columns).
Or, when you time your run, ignore the system time (that will be primarily I/O) and just watch the user time. Of course, different runs may very well be dealing with different amounts of data, so you would expect different amounts of I/O.
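A minimal sketch of that sequence (the 1 GB allocation is just an example; scale it to your RAM):
free -m                                   # note the "buffers" and "cached" columns
perl -e '$x = "\0" x (1024*1024*1024)'    # allocate roughly 1 GB to push cached data out
free -m                                   # compare: the cached figure should have dropped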
The most reliable way is to run the test several times and use the fastest time as the value to compare.
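A minimal illustration of that approach (assuming it is fine to overwrite copy2 between runs):
for i in 1 2 3; do        # repeat the same copy several times
    rm -f copy2           # start from the same state each run
    time cp copy copy2    # keep the fastest of the reported times
done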
sync && echo 3 > /proc/sys/vm/drop_caches
time cp original copy
sync && echo 3 > /proc/sys/vm/drop_caches
time cp copy copy2
I'm trying to find a more efficient way of reading a file's contents on Linux without using the "cat" command, especially for larger files, since in such cases cat drives the memory and CPU usage on the server way up.
One thing that comes to mind is using grep -v "character-set-which-is-unlikely-in-the-file" filename.
But using a different character set every time and hoping it doesn't appear in the file wouldn't be efficient.
Any other thoughts?
If you just want to read through the file, so it gets cached, the simplest way is perhaps this:
cat filename > /dev/null
Note that you don't need to show the data on screen to read it from disk. That command reads the file, and ignores the content by dumping it in /dev/null, but it still reads all the data.
If the CPU load goes up, that is probably a good thing, meaning that the computer is working hard, and will be finished sooner rather than later. If it crashes, though, there is something else wrong.
If you have some specific reason not to use the "cat" command, you can try "dd" instead, but it is more complicated to write and will not be faster:
dd if=filename of=/dev/null bs=1M
Addendum:
This inspired me to run some tests. On my particular computer both "cat" and "dd" took 24.27-24.31 seconds to read a large file on a mechanical disk when it wasn't already cached, and 0.39-0.40 seconds when it was cached. (Three tests of each case, with very little variability.)
Both these programs contain code to write the data, even if it is dumped to /dev/null, so one could expect that a program specifically written to just read would be slightly faster, but I got the same times when I tried that.
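If you want to reproduce that kind of cold-versus-warm comparison, a rough sketch (needs root to drop the caches; filename is a placeholder) could be:
sync && echo 3 | sudo tee /proc/sys/vm/drop_caches > /dev/null   # flush the page cache first
time dd if=filename of=/dev/null bs=1M                           # cold-cache read
time dd if=filename of=/dev/null bs=1M                           # second read should come from the cache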
I have several environment variables in my ~/.bashrc that point to different directories. I am running a program that creates a new folder every time it runs and puts a timestamp in the directory name, for example baseline_2015_11_10_15_40_31-model-stride_1-type_1. Is there a way of making a variable that links to the last created directory, so that I can do something like:
cd $CURRENT_DIR
Your mileage may vary a lot depending on what exactly you need to accomplish. However, in almost all cases I would advise against doing something as weird and unreliable as what's described below, and instead revising your architecture to avoid hunting for directories.
Method 1
If your program creates a subdirectory inside the current directory, you know that nothing else happens in that directory, and you want the subdirectory with the latest timestamp, then you can do something like:
your_complex_program_that_creates_dir
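# ls -t sorts by modification time (newest first); --group-directories-first keeps directories ahead of plain files, so head -n1 picks the newest directory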
TARGET_DIR=$(ls -t1 --group-directories-first | head -n1)
cd "$TARGET_DIR"
Method 2
If a lot of stuff happens on the system, then you'll end up monitoring what your program does with the filesystem and reacting when it creates a directory. There are two ways to do that, using strace or inotify, and both are relatively complex. Here's the way to do it with strace:
strace -o some_temp_file.strace your_complex_program_that_creates_dir
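# some_temp_file.strace now contains one line for every system call the program made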
TARGET_DIR=$(sed -ne '/^mkdir(/ { s/^mkdir("\(.*\)", .*).*$/\1/; p }' some_temp_file.strace)
cd "$TARGET_DIR"
This snippet runs your_complex_program_that_creates_dir under the control of strace, which logs every system call your program makes into a file. Afterwards, that file is searched for a line like
mkdir("target_dir", 0777) = 0
and the value "target_dir" is extracted into a variable. Note that:
if your program creates more than one directory (even for temporary purposes, deleting it afterwards, or whatever), there's really no way to determine which of them to grab
running a program under strace is much slower than normal due to the huge overhead of logging all the syscalls
it's highly non-portable: facilities like strace exist on most modern OSes, but the implementations vary a lot
A solution with inotify works in the same way but uses a different mechanism: an OS hook logs all the operations the process performs on the file system, and you react to them (remembering the created directory).
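A very rough sketch of that variant, using inotifywait from the inotify-tools package (assumed to be installed; events.log is a made-up name, and the race between starting the watcher and the program is ignored):
inotifywait -m -e create --format '%e %f' . > events.log 2>/dev/null &   # log create events in the current directory
WATCHER_PID=$!
your_complex_program_that_creates_dir
kill "$WATCHER_PID"
TARGET_DIR=$(awk '/ISDIR/ { name=$2 } END { print name }' events.log)    # keep the last directory that was created
cd "$TARGET_DIR"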
However, I repeat, I'd strongly advise against using any of these solutions beyond research interest.
I need to search through a TB of raw hard disk data to find a couple of things. I tried using sudo cat /dev/sdc | less, but this fails because everything that is read gets put into RAM. I only have 8 GB of RAM and 8 GB of swap space, so putting a whole TB of data into RAM will not work.
I was wondering if I could somehow make less forget what it has read after the 1 GB mark, or maybe use another editor.
I accidentally repartitioned my drive and lost some important files. I tried some utilities but none of them worked so I tried this. I got a few of the files but I can't get the rest because the computer freezes and runs out of RAM.
I learned my lesson, I need to make more frequent backups. Any help is greatly appreciated.
The -B option to less is exactly what you're asking for: it allows less to be forgetful. Combine it with -b1048576 to allocate 1 GB (the -b unit is KB).
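For the piped read from the question, the non-interactive form could look something like:
sudo cat /dev/sdc | less -B -b1048576    # cap less's buffer for the pipe at roughly 1 GB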
Or do it the interactive way: run less normally, scroll down until the point where it starts to get a little laggy, then just type -B at the less prompt to activate the option (did you know you can set less options interactively?)
Just don't try to scroll backward very far or you'll be in forgotten-content land, where weird things happen.
(Side note: I've done this kind of recovery before, and it's easier if you can find the filesystem structures (inode blocks etc.) that point to the data, rather than searching for the data in a big dump. Even if some of the inodes are gone, by first recovering everything you can from the surviving inodes you narrow down the range of unknown blocks where the other files might be.)
I have a Thecus N8900 NAS, which is a Linux-based file server providing files via NFS to six clients. For some reason that Thecus support has yet to explain, it runs a script that checks /proc/meminfo every 60 seconds, and if the disk cache exceeds 50% of available RAM it runs an "echo 3 > /proc/sys/vm/drop_caches" command to flush the cache.
Leaving aside the issue of whether that makes sense or not, the actual "echo 3 > /proc/sys/vm/drop_caches" command can take hours to complete, which seems way too long to me.
The big problem is that when this happens, the load on the machine spikes, as does the disk utilization, making all NFS traffic crawl until the command finally completes, at which point things are responsive again.
The NAS itself has 16 gigs of RAM, 7 drives in a raid6 configuration (plus a hot spare), no drive problems at all (according to S.M.A.R.T. tests).
So the question is: what would cause the drop_caches command to take so long?
The command itself should complete instantaneously. The consequences (everything needs to be cached again) can take a lot of time. The script doesn't make sense: if you can remove it completely, that would be a good idea. (Also, this is off topic on Stack Overflow.)
Edit: does it also execute a sync before echo 3 > /proc/sys/vm/drop_caches, as in
sync; echo 3 > /proc/sys/vm/drop_caches? The sync operation, which flushes all pending writes to disk, may take a while to complete. And while sync has its own performance cost, it can make some sense: in case of a sudden power failure, the data has already been written to disk, so you are safe.
I have 100,000 1 KB files, and a program that reads them. It is really slow.
My best idea for improving performance is to put them on a ramdisk.
But this is a fragile solution: every restart requires setting up the ramdisk again
(and copying the files there is slow as well).
My second best idea is to concatenate the files and work with the result, but that is not trivial.
Is there a better solution?
Note: I need to avoid dependencies in the program, even Boost.
You can optimize by storing the files contiguously on disk.
On a disk with ample free room, the easiest way would be to read a tar archive instead.
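A minimal sketch of that idea, assuming the program can read from standard input (the archive and program names below are placeholders):
tar -cf all_files.tar dir_with_small_files/   # pack the small files into one archive, once
tar -xOf all_files.tar | your_program         # later: stream all contents sequentially to the program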
Other than that, there is (or used to be) a Debian package for 'readahead'.
You can use that tool to
profile a normal run of your software
edit the list of files accessed (as detected by readahead)
You can then call readahead with that file list (it will order the files in disk order so that throughput is maximized and seek times minimized).
Unfortunately, it has been a while since I used these, so I hope you can google your way to the respective packages.
This is what I seem to have found now:
sudo apt-get install readahead-fedora
Good luck
If your files are static, I agree: just tar them up and then place the archive in a RAM disk. It will probably be faster to read directly out of the tar file, but you can test that.
Edit: instead of tar, you could also try creating a squashfs volume.
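A rough sketch of the squashfs variant (requires squashfs-tools and root to mount; the directory and mount-point names are placeholders):
mksquashfs dir_with_small_files/ files.squashfs     # build a compressed, read-only image of the files
sudo mkdir -p /mnt/smallfiles
sudo mount -o loop files.squashfs /mnt/smallfiles   # reads now come from one contiguous, cache-friendly image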
If you don't want to do that, or still need more performance, then:
put your data on an SSD.
start investigating filesystem performance, starting with ext4, XFS, etc.