Why is there a delay when operating on local files from a mounted directory? - io

I mounted a remote host directory onto a local directory:
sshfs <remote_ip>:<remote_host_dir> <local_dir> -p <port>
If I perform an operation inside the mounted directory, the delay is reasonable:
$ cd ~/mounted_directory/ && time ls
.....
ls 0.01s user 0.01s system 9% cpu 0.112 total
But if I perform an operation on a local directory while my working directory is the mounted one, the delay is strange:
$ cd ~/mounted_directory
$ time ls .
....
ls . 0.00s user 0.01s system 9% cpu 0.110 total
$ time ls ~ # I/O operation on a local directory
....
ls ~ 0.00s user 0.01s system 10% cpu 0.133 total # slow as well
$ cd ~ && time ls ~ # operation from local directory
.....
ls . 0.01s user 0.00s system 93% cpu 0.009 total # reasonably fast
I would expect operations on a local directory to be just as fast even when my current working directory is on the remote mount. Why the delay?
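A quick way to see where the time actually goes is to compare syscall summaries for the fast and the slow case, assuming strace is available (strace -c prints a per-syscall time breakdown). This is just a diagnostic sketch reusing the directory names from above:
$ cd ~ && strace -c -f ls ~ > /dev/null                       # baseline: run from a local directory
$ cd ~/mounted_directory && strace -c -f ls ~ > /dev/null     # same command, run from the sshfs mount
If the extra time shows up in calls that touch the current working directory, the slowdown comes from the shell or ls consulting the network mount even though the target is local.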

Related

How to use the cgroup v2 memory controller

I want to use cgroup v2 to control memory usage.
First:
cd /sys/fs/cgroup
mkdir test
cat cgroup.controllers
cpuset cpu io memory pids
echo 1M > memory.high
then open a new terminal and
stress -m 1 --vm-bytes 200M --vm-keep # here I get the pid
then
echo pid > cgroup.procs
Using cat cgroup.procs I can see the pid listed in the file, and cat memory.high shows 104857600, but memory.current is 0.
I expected this to work, but it doesn't. Is some step wrong, and what should I do?
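No answer is recorded above, but two likely problems are that memory.high and cgroup.procs were written in the root cgroup instead of inside the new test group, and that the memory controller was never enabled for child groups. A minimal sketch of the usual sequence (run as root, directly under /sys/fs/cgroup; this assumes nothing else is managing that hierarchy):
cd /sys/fs/cgroup
echo "+memory" > cgroup.subtree_control      # enable the memory controller for child cgroups
mkdir test
echo 100M > test/memory.high                 # set the limit inside the new group, not at the root
# in another terminal:
stress -m 1 --vm-bytes 200M --vm-keep
# back in the first terminal, move every stress process into the group:
for pid in $(pgrep stress); do echo "$pid" > test/cgroup.procs; done
cat test/memory.current                      # should now report non-zero usage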

Hard disk space decreasing by 1 GB every day

I have installed Meteor, Node.js and MongoDB on Ubuntu 14.04. Every day I find my hard disk space decreasing by almost 1 GB. What may cause that? Initially I had 16 GB and it was full, so I had to add 24 GB to make it 40 GB. Now, 3 days after the upgrade, the disk is already at 21 GB. The server is an EC2 instance on Amazon.
An easy way to find out where all your space is being used is:
sudo du -h -d 1 <location>
Example:
sudo du -h -d 1 /
Output:
759M /usr
15M /home
6.9M /run
6.4M /bin
16K /lost+found
4.0K /mnt
1.5G /
This will give you the HDD space consumption starting from your root directory. Follow the output and run the command again with a different directory to see which one has taken up all the space.
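If you prefer a single pass, a rough one-liner (assuming GNU coreutils, so that sort -h understands human-readable sizes) is:
sudo du -xh / 2>/dev/null | sort -rh | head -20   # 20 largest entries, staying on the root filesystem (-x)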
UPDATE:
As you said, Docker is taking up space on your server, so clean up its unused data.
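A sketch of the usual cleanup, assuming a Docker release recent enough to have the docker system subcommands:
docker system df                   # show how much space images, containers and volumes use
docker system prune                # remove stopped containers, dangling images and unused networks
docker system prune -a --volumes   # more aggressive: also remove unused images and volumes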

Disk usage issue with rsync and --link-dest

I have a disk usage problem with rsync and --link-dest.
The incremental backup is taking up the full disk space:
#localhost media]$ ls
orig
----------------------------------------------------
localhost media]$ du -sh .
25M .
----------------------------------------------------
localhost media]$ rsync -avh orig/ full
----------------------------------------------------
#localhost media]$ du -sh .
49M .
----------------------------------------------------
localhost media]$ echo 1111 > orig/foo111
----------------------------------------------------
localhost media]$ rsync -avh --link-dest=full orig/ orig_1
----------------------------------------------------
localhost media]$ ls orig_1/foo111
orig_1/foo111
_____________________________________________________
localhost media]$ ls full/foo111
ls: cannot access full/foo111: No such file or directory
Everything looks good so far; the latest change is reflected in orig_1.
But the directories aren't hard linked, and they all take up their full size.
-----------------------------------------------------
localhost media]$ du -sh .
74M .
---------------------------------------------
localhost media]$ du -sh orig_1/
25M orig_1/
--------------------------------------------
localhost media]$ du -sh orig
25M orig
---------------------------------------------
localhost media]$ du -sh full
25M full
Am I supposed to see orig_1 with a size of 0? Also, the stat command shows no hard links. What am I doing wrong?
When you ran rsync -avh --link-dest=full orig/ orig_1, you ignored this error message (it's more obvious if you remove -v):
--link-dest arg does not exist: full
If we then take a look at man rsync under --link-dest, we find:
If DIR is a relative path, it is relative to the destination directory.
And there it is. full is relative to the current directory. Relative to the destination directory, it would be ../full.
If you try again with rsync -avh --link-dest=../full orig/ orig_1, you get what you expect:
$ du -sh *
149M full
149M orig
232K orig_1
$ du -sh .
298M .
Note that, when counted individually, the directories still appear to take up the full space:
$ du -sh orig_1
149M orig_1
This is because du keeps track of files it has already seen within a single invocation and avoids counting them twice: in the combined listing above, orig_1's hard-linked files had already been counted under full.
--link-dest takes a path relative to the destination. You want --link-dest=../full.
Standard Unix filesystems do not allow hard links to directories, except for the special . and .. links. --link-dest only creates hard links for files, the rest of the directory structure is recreated as real directories.
And even if hard links were allowed to directories, du would still show the full size of each link. When using hard links, there's no distinction between the original and the link, they're each just names that refer to a particular inode, and du would scan them equivalently.
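A quick way to confirm that two paths really are hard links to the same file is to compare inode numbers and link counts, for example (somefile is just a placeholder for any file present in both trees; stat -c assumes GNU coreutils):
$ ls -li full/somefile orig_1/somefile                 # identical inode numbers mean the same file
$ stat -c '%h hard links, inode %i' orig_1/somefile    # a link count above 1 confirms hard linking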

Why is the system CPU time (% sy) high?

I am running a script that loads big files. I ran the same script on a single-core OpenSUSE server and on a quad-core PC. As expected, it is much faster on my PC than on the server. But the script slows down the server and makes it impossible to do anything else.
My script is
for 100 iterations
Load saved data (about 10 MB)
time myscript (in PC)
real 0m52.564s
user 0m51.768s
sys 0m0.524s
time myscript (in server)
real 32m32.810s
user 4m37.677s
sys 12m51.524s
I wonder why "sys" is so high when I run the code on the server. I used the top command to check memory and CPU usage.
It seems there is still free memory, so swapping is not the reason. %sy is very high, which is probably why the server is slow, but I don't know what is causing it. The process using the highest percentage of CPU (99%) is "myscript". %wa is zero in the screenshot, but it sometimes gets very high (50%).
When the script is running, the load average is greater than 1, but I have never seen it go as high as 2.
I also checked my disk:
strt:~ # hdparm -tT /dev/sda
/dev/sda:
Timing cached reads: 16480 MB in 2.00 seconds = 8247.94 MB/sec
Timing buffered disk reads: 20 MB in 3.44 seconds = 5.81 MB/sec
john#strt:~> df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 245G 102G 131G 44% /
udev 4.0G 152K 4.0G 1% /dev
tmpfs 4.0G 76K 4.0G 1% /dev/shm
I have checked these things, but I am still not sure what the real problem on my server is or how to fix it. Can anyone identify a probable reason for the slowness? What could be the solution?
Or is there anything else I should check?
Thanks!
You're seeing high sys activity because loading the data requires many system calls, which run in the kernel. It may be possible to resolve your slowness problem without upgrading the server: you can modify the scheduling priority. See the man pages for nice and renice, in particular:
Niceness values range from -20 (the highest priority, lowest niceness) to 19 (the lowest priority, highest niceness).
$ ps -lp 941
F S UID PID PPID C PRI NI ADDR SZ WCHAN TTY TIME CMD
4 S 0 941 1 0 70 -10 - 1713 poll_s ? 00:00:00 sshd
$ nice -n 19 ./test.sh
My niceness value is 19
$ renice -n 10 -p 941
941 (process ID) old priority -10, new priority 10
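Given the very low buffered disk read rate shown by hdparm above, the bottleneck may also be I/O rather than CPU. In that case I/O priority can be lowered in the same spirit; a sketch, assuming the kernel's I/O scheduler honours ionice classes (CFQ/BFQ do):
$ ionice -c 3 ./myscript     # run in the idle class, using only spare disk bandwidth
$ ionice -c 2 -n 7 -p 941    # or demote an already-running process to the lowest best-effort priority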

Why is Cygwin so slow?

I ran a script on Ubuntu and timed it:
$ time ./merger
./merger 0.02s user 0.03s system 99% cpu 0.050 total
It took less than a second.
But when I run it under Cygwin:
$ time ./merger
real 3m22.407s
user 0m0.367s
sys 0m0.354s
It takes more than 3 minutes.
Why does this happen? What can I do to speed up execution under Cygwin?
As others have already mentioned, Cygwin's implementation of fork and process spawning on Windows in general are slow.
Using a simple fork() benchmark, I get the following results:
rr-#cygwin:~$ ./test 1000
Forked, executed and destroyed 1000 processes in 5.660011 seconds.
rr-#arch:~$ ./test 1000
Forked, executed and destroyed 1000 processes in 0.142595 seconds.
rr-#debian:~$ ./test 1000
Forked, executed and destroyed 1000 processes in 1.141982 seconds.
Using time (for i in {1..10000};do cat /dev/null;done) to benchmark process spawning performance, I get the following results:
rr-#work:~$ time (for i in {1..10000};do cat /dev/null;done)
(...) 19.11s user 38.13s system 87% cpu 1:05.48 total
rr-#arch:~$ time (for i in {1..10000};do cat /dev/null;done)
(...) 0.06s user 0.56s system 18% cpu 3.407 total
rr-#debian:~$ time (for i in {1..10000};do cat /dev/null;done)
(...) 0.51s user 4.98s system 21% cpu 25.354 total
Hardware specifications:
cygwin: Intel(R) Core(TM) i7-3770K CPU @ 3.50GHz
arch: Intel(R) Core(TM) i7-4790K CPU @ 4.00GHz
debian: Intel(R) Core(TM)2 Duo CPU T5270 @ 1.40GHz
So as you can see, no matter what you use, Cygwin will always perform worse. It loses hands down even to much weaker hardware (compare the cygwin and debian machines above).
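If you want a rough way to separate fork overhead from fork-plus-exec overhead without the external benchmark, you can time empty subshells against a tiny external program. A shell-only sketch: each ( : ) forks a subshell without exec'ing anything, while /bin/true forks and execs:
$ time (for i in {1..1000}; do ( : ); done)      # fork only
$ time (for i in {1..1000}; do /bin/true; done)  # fork + exec
The absolute numbers include shell overhead, but both loops will be dramatically slower on Cygwin than on Linux, which is the point the benchmarks above make.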
