Why do traceroute times sometimes go down between hops?

I'm seeing output similar to that in the traceroute manpage, but I've always been curious why ping times sometimes go down between hops. That is, I'd always expect values to increase from hop to hop, like:
[yak 71]% traceroute nis.nsf.net.
traceroute to nis.nsf.net (35.1.1.48), 64 hops max, 38 byte packet
1 helios.ee.lbl.gov (128.3.112.1) 19 ms 19 ms 0 ms
2 lilac-dmc.Berkeley.EDU (128.32.216.1) 39 ms 39 ms 19 ms
3 lilac-dmc.Berkeley.EDU (128.32.216.1) 39 ms 39 ms 19 ms
4 ccngw-ner-cc.Berkeley.EDU (128.32.136.23) 39 ms 40 ms 39 ms
5 ccn-nerif22.Berkeley.EDU (128.32.168.22) 39 ms 39 ms 39 ms
6 128.32.197.4 (128.32.197.4) 40 ms 59 ms 59 ms
7 131.119.2.5 (131.119.2.5) 59 ms 59 ms 59 ms
8 129.140.70.13 (129.140.70.13) 99 ms 99 ms 80 ms
9 129.140.71.6 (129.140.71.6) 139 ms 239 ms 319 ms
10 129.140.81.7 (129.140.81.7) 220 ms 199 ms 199 ms
11 nic.merit.edu (35.1.1.48) 239 ms 239 ms 239 ms
But what is the reasoning for:
[yak 72]% traceroute allspice.lcs.mit.edu.
traceroute to allspice.lcs.mit.edu (18.26.0.115), 64 hops max
1 helios.ee.lbl.gov (128.3.112.1) 0 ms 0 ms 0 ms
2 lilac-dmc.Berkeley.EDU (128.32.216.1) 19 ms 19 ms 19 ms
3 lilac-dmc.Berkeley.EDU (128.32.216.1) 39 ms 19 ms 19 ms
4 ccngw-ner-cc.Berkeley.EDU (128.32.136.23) 19 ms 39 ms 39 ms
5 ccn-nerif22.Berkeley.EDU (128.32.168.22) 20 ms 39 ms 39 ms
6 128.32.197.4 (128.32.197.4) 59 ms 119 ms 39 ms
7 131.119.2.5 (131.119.2.5) 59 ms 59 ms 39 ms
8 129.140.70.13 (129.140.70.13) 80 ms 79 ms 99 ms
9 129.140.71.6 (129.140.71.6) 139 ms 139 ms 159 ms
10 129.140.81.7 (129.140.81.7) 199 ms 180 ms 300 ms
11 129.140.72.17 (129.140.72.17) 300 ms 239 ms 239 ms
12 * * *
13 128.121.54.72 (128.121.54.72) 259 ms 499 ms 279 ms
14 * * *
15 * * *
16 * * *
17 * * *
18 ALLSPICE.LCS.MIT.EDU (18.26.0.115) 339 ms 279 ms 279 ms
Why does the time from hop 3 to hop 4 go from 39 ms down to 19 ms?

They are not just different "hops" but entirely different events, at different times, and possibly under different conditions.
The way traceroute works is: it sends a packet with a TTL of 1 and sees how long an answer takes to arrive. It repeats this three times to account for normal variation (note that even the three numbers for "the same hop" are sometimes very different, since they too are completely separate measurements).
Then traceroute sends a packet with a TTL of 2 (and then 3, 4, etc.) and sees how long the answer takes. In the meantime, the load on the first router may have dropped, so your packets are forwarded faster. Or the physical cable your packets go out on may no longer be busy because your roommate just finished downloading his porn (whereas before, your network card had to wait for the cable to go idle). This happens all the time, and you have no way of knowing or controlling it.
There is no going back in time, so you can't change the fact that a second ago it took a bit longer for your packets to get through. You're probing at two different times, so all you can get is a reasonable guesstimate. Yes, the time from hop to hop should generally increase, but it need not.
So the round-trip time isn't really going down between hops 3 and 4. It only went down in the meantime, while you independently tried to estimate how long 3 hops and 4 hops take.
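To make the independence of the probes concrete, here is a minimal sketch of that loop in Python, assuming the third-party scapy library and root privileges (this illustrates the mechanism; the real traceroute binary uses UDP probes and raw sockets, not this code):

import time
from scapy.all import ICMP, IP, sr1

# Probe each TTL three times, timing each answer independently.
# Every RTT below is a separate measurement taken at a different
# moment, which is why a later hop can report a smaller number.
def trace(dest, max_hops=30, probes=3):
    for ttl in range(1, max_hops + 1):
        hop, rtts = "*", []
        for _ in range(probes):
            start = time.time()
            reply = sr1(IP(dst=dest, ttl=ttl) / ICMP(), timeout=2, verbose=0)
            if reply is None:
                rtts.append("*")
                continue
            hop = reply.src
            rtts.append("%.0f ms" % ((time.time() - start) * 1000))
        print(ttl, hop, *rtts)
        if hop == dest:   # the destination itself answered
            break

trace("18.26.0.115")      # allspice.lcs.mit.edu from the question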

Most likely random delays between the individual probes. Note that both the second and third probes go from 19 ms at hop 3 to 39 ms at hop 4. They're all on the same network anyway, so most delays would be due to processing/congestion.
Another case that you occasionally see is that some servers give ICMP messages higher QoS, superficially giving the appearance of better performance.

On a packet-switched network, packet delays are not deterministic, particularly if there is congestion. Each hop of the traceroute test is a separate test, so varying network conditions can easily cause later tests to show better latency than nearer hops.
That is to say, traceroute doesn't just send out three packets and somehow figure out the time at each hop. It sends out packets deliberately designed to time out after n hops and measures the time as n increases.

Related

chrony: Find the most recent sync time

I am trying to find out the most recent time chrony checked to see if we are in sync. I can see chrony's sources with the following command:
~]$ chronyc sources
210 Number of sources = 3
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^* a.b.c 1 10 377 32m -8638us[-8930us] +/- 103ms
^- d.e.f 1 10 366 77m +2928us[+1960us] +/- 104ms
^+ g.h.i 2 10 377 403 -14ms[ -14ms] +/- 137ms
The * symbol indicates the server that we synced to, and the + symbol indicates "acceptable" servers. I can see that chrony is indeed polling both a.b.c and g.h.i.
So what I want to know is whether I can use the minimum of the two LastRx values as the most recent time we confirmed we are in sync. In this example we are synced to a.b.c, but 403 s < 32 m, so does that mean that 403 seconds ago chrony checked against g.h.i and determined we are in sync? In other words, is the last time I know system time was in sync 403 seconds ago, or 32 minutes ago?
There is some documentation on the chronyc command here: https://chrony.tuxfamily.org/doc/3.5/chronyc.html
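If you want this programmatically rather than by eyeballing the sources table, `chronyc tracking` reports a "Ref time (UTC)" line recording when the last measurement was actually used to update the clock. A small sketch, assuming chronyc is on PATH and the chrony 3.x output layout:

import subprocess
from datetime import datetime

out = subprocess.check_output(["chronyc", "tracking"], text=True)
for line in out.splitlines():
    if line.startswith("Ref time"):
        # e.g. "Ref time (UTC)  : Fri Jan 01 10:00:00 2021"
        stamp = line.split(":", 1)[1].strip()
        ref = datetime.strptime(stamp, "%a %b %d %H:%M:%S %Y")
        print("last clock update:", datetime.utcnow() - ref, "ago")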

mpm configuration for httpd

I run a website on five httpd servers (CentOS 7) in EC2; the instance type is m3.2xlarge.
The servers are behind a load balancer.
The memory usage gradually keeps climbing on all the instances.
For example:
Memory usage a few seconds after restarting the httpd service:
[centos#ip-10-0-1-77 ~]$ while sleep 1; do free -m; done
total used free shared buff/cache available
Mem: 29741 2700 26732 36 307 26728
Swap: 0 0 0
total used free shared buff/cache available
Mem: 29741 2781 26651 36 307 26647
Swap: 0 0 0
total used free shared buff/cache available
Mem: 29741 2820 26613 36 307 26609
Swap: 0 0 0
[centos#ip-10-0-1-77 ~]$
.
.
.
This is what I see after an hour:
[centos#ip-10-0-1-77 ~]$ free -m
total used free shared buff/cache available
Mem: 29741 29092 363 41 284 346
Swap: 0 0 0
As shown above, it consumes all the memory (30 GB) within an hour.
To avoid this I started using the worker MPM configuration.
The following is what I added at the bottom of /etc/httpd/httpd.conf:
<IfModule mpm_worker_module>
MaxRequestWorkers 2500
MaxSpareThreads 250
MinSpareThreads 75
ServerLimit 100
StartServers 3
ThreadsPerChild 25
</IfModule>
Can someone help and suggest the right configuration to utilize the RAM properly on all the instances?
A standard Apache process takes up about 12 MB of RAM. If you have 30 GB reserved for Apache, you will never reach that with a ServerLimit of 100 (100 × 12 MB = 1200 MB = 1.2 GB). So I assume that Apache isn't taking up all that memory.
Is there an application or a DB involved? Those can take up larger amounts of RAM.
For your servertuning.conf (or httpd.conf, since you put it there):
<IfModule mpm_worker_module>
#maximum number of requests one worker handles before it's forced to close; 2500 seems almost a little low
MaxRequestsPerChild 2500
#maximum number of worker threads which are kept spare
#250 seems quite high, but it really depends on the traffic you are experiencing; we normally don't use more than 50-75
MaxSpareThreads 250
#minimum number of worker threads which are kept spare
#really high; only useful if you often experience bursts of traffic
#otherwise you should be fine with 15-30, maybe 50 if you experience higher fluctuation -> bigger bursts of requests
MinSpareThreads 75
#upper limit on the configurable number of threads per child process
#you have to increase this if you want more than 64 ThreadsPerChild
ThreadLimit 64
#maximum number of simultaneous client connections
#that is really low! --> increase that, definitely! We run on 1000, so about 12GB max
MaxClients 100
#initial number of server processes to start
#3 is really low if you expect your server to be flooded with requests the second you start it
#maybe turn it up a little, to around 20, or even 50 if you receive lots of traffic right after a restart
StartServers 3
#number of worker threads created by each child process
#25 threads per worker is not too much, but at some point administering that many threads gets more expensive than creating new ones
#would suggest leaving it at 25 or turning it up to around 40
ThreadsPerChild 25
</IfModule>
Notice that I changed ServerLimit to MaxClients and MaxRequestWorkers to MaxRequestsPerChild, because as far as I know those are the terms used in mpm_worker.
Additionally, you can change the following variables:
#KeepAlive: Whether or not to allow persistent connections (more than
#one request per connection). Set to "Off" to deactivate.
#if it's on, leave it there
KeepAlive On
#MaxKeepAliveRequests: The maximum number of requests to allow
#during a persistent connection. Set to 0 to allow an unlimited amount.
#We recommend you leave this number high, for maximum performance.
#default=100, but you can turn that up if your sites contain a lot of items (img, css, ...)
#we are using about 20 * <average object count per site> = 600
MaxKeepAliveRequests 600
#KeepAliveTimeout: Number of seconds to wait for the next request from the
#same client on the same connection.
#would recommend decreasing that, otherwise you could become a victim of slow DoS attacks
#default is 15, we are running just fine on 5
KeepAliveTimeout 5
To further prevent slow-dos or piling up of open sessions, you can use mod_reqtimeout:
<IfModule mod_reqtimeout.c>
# allow 10s for the headers, extended while data keeps arriving at >= 1000 bytes/s, up to a 20s cap.
# almost the same for the body, except that it is tricky to
# limit the request timeout within the body at all - it may take time to generate the body.
# below are the default values
#RequestReadTimeout header=10-20,MinRate=1000 body=20,MinRate=1000
# and this is how I'd set them with today's internet speed
# deduct the according numbers from explanation above...
RequestReadTimeout header=2-5,MinRate=100000 body=5-10,MinRate=1000000
</IfModule>
If that's not enough to fix your RAM issues (if they are really caused by Apache), use your server OS's tools (top, htop, ps, and the like) to find out what is actually taking up all the RAM.
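As a quick first pass at attribution, a sketch that sums the resident set size of all httpd workers (assumes a procps-style ps, as found on CentOS):

import subprocess

out = subprocess.check_output(["ps", "-eo", "comm,rss"], text=True)
total_kb = 0
for line in out.splitlines()[1:]:          # skip the header row
    parts = line.split()
    if len(parts) == 2 and parts[0] == "httpd":
        total_kb += int(parts[1])          # rss is reported in KiB
# RSS counts pages shared between workers once per process, so this
# is an upper bound on what Apache is really responsible for.
print("httpd RSS total: %.1f MB" % (total_kb / 1024))

If that number stays far below what free reports as used, the memory is going somewhere other than Apache.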

Configuring Snap for performance

I'm just playing with the Snap framework and wanted to see how it performs against other frameworks (under completely artificial circumstances).
What I found is that my Snap application tops out at about 1500 requests/second (the app is simply snap init; snap build; ./dist/app/app, i.e. no code changes to the default app created by snap):
$ ab -n 20000 -c 500 http://127.0.0.1:8000/
This is ApacheBench, Version 2.3 <$Revision: 1706008 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 127.0.0.1 (be patient)
Completed 2000 requests
Completed 4000 requests
Completed 6000 requests
Completed 8000 requests
Completed 10000 requests
Completed 12000 requests
Completed 14000 requests
Completed 16000 requests
Completed 18000 requests
Completed 20000 requests
Finished 20000 requests
Server Software: Snap/0.9.5.1
Server Hostname: 127.0.0.1
Server Port: 8000
Document Path: /
Document Length: 721 bytes
Concurrency Level: 500
Time taken for tests: 12.845 seconds
Complete requests: 20000
Failed requests: 0
Total transferred: 17140000 bytes
HTML transferred: 14420000 bytes
Requests per second: 1557.00 [#/sec] (mean)
Time per request: 321.131 [ms] (mean)
Time per request: 0.642 [ms] (mean, across all concurrent requests)
Transfer rate: 1303.07 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 44 287.6 0 3010
Processing: 6 274 153.6 317 1802
Waiting: 5 274 153.6 317 1802
Total: 20 318 346.2 317 3511
Percentage of the requests served within a certain time (ms)
50% 317
66% 325
75% 334
80% 341
90% 352
95% 372
98% 1252
99% 2770
100% 3511 (longest request)
I then fired up a Grails application, and it seems like Tomcat (once the JVM warms up) can take a bit more load:
$ ab -n 20000 -c 500 http://127.0.0.1:8080/test-0.1/book
This is ApacheBench, Version 2.3 <$Revision: 1706008 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 127.0.0.1 (be patient)
Completed 2000 requests
Completed 4000 requests
Completed 6000 requests
Completed 8000 requests
Completed 10000 requests
Completed 12000 requests
Completed 14000 requests
Completed 16000 requests
Completed 18000 requests
Completed 20000 requests
Finished 20000 requests
Server Software: Apache-Coyote/1.1
Server Hostname: 127.0.0.1
Server Port: 8080
Document Path: /test-0.1/book
Document Length: 722 bytes
Concurrency Level: 500
Time taken for tests: 4.366 seconds
Complete requests: 20000
Failed requests: 0
Total transferred: 18700000 bytes
HTML transferred: 14440000 bytes
Requests per second: 4581.15 [#/sec] (mean)
Time per request: 109.143 [ms] (mean)
Time per request: 0.218 [ms] (mean, across all concurrent requests)
Transfer rate: 4182.99 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 67 347.4 0 3010
Processing: 1 30 31.4 21 374
Waiting: 0 26 24.4 20 346
Total: 1 97 352.5 21 3325
Percentage of the requests served within a certain time (ms)
50% 21
66% 28
75% 35
80% 42
90% 84
95% 230
98% 1043
99% 1258
100% 3325 (longest request)
I'm guessing that part of this could be the fact that Tomcat reserves a lot of RAM and can keep/cache compiled methods. During this experiment Tomcat was using in excess of 700 MB of RAM while Snap barely approached 70 MB.
Questions I have:
Am I comparing apples and oranges here?
What steps would one take to optimise Snap for throughput/speed?
Further experiments:
Then, as suggested by mightybyte, I started experimenting with the +RTS -A4M -N4 options. The app was able to serve just over 2000 requests per second (about a 25% increase).
I also removed the nested templating and served a document (same size as before) from the top-level tpl file. This increased performance to just over 7000 requests a second. Memory usage went up to about 700 MB.
I'm by no means an expert on the subject, so I can only really answer your first question, and yes, you are comparing apples and oranges (and also bananas, without realizing it).
First off, it looks like you are attempting to benchmark different things, so naturally your results will be inconsistent. One of these is the sample Snap application and the other is just "a Grails application". What exactly is each of these doing? Are you serving pages? Handling requests? The difference in applications will explain the difference in performance.
Secondly, the difference in RAM usage also shows the difference in what these applications are doing. Haskell web frameworks are very good at handling heavy load without much RAM, whereas other frameworks, like Tomcat as you saw, will be limited in their performance when RAM is limited. Try limiting both applications to 100 MB and see what happens to your performance difference.
If you want to compare the different frameworks, you really need to run a standard application to do that. Snap did this with a Pong benchmark. The results of an old test (from 2011 and Snap 0.3) can be seen here. This paragraph is extremely relevant to your situation:
If you’re comparing this with our previous results you will notice that we left out Grails. We discovered that our previous results for Grails may have been too low because the JVM had not been given time to warm up. The problem is that after the JVM warms up for some reason httperf isn’t able to get any samples from which to generate a replies/sec measurement, so it outputs 0.0 replies/sec. There are also 1000 connreset errors, so we decided the Grails numbers were not reliable enough to use.
As a comparison, the Yesod blog has a Pong benchmark from around the same time that shows similar results. You can find that here. They also link to their benchmark code if you would like to run a more similar benchmark; it is available on GitHub.
The answer by jkeuhlen makes good observations relevant to your first question. As to your second question, there are definitely things you can play with to tune performance. If you look at Snap's old raw result data, you can see that we were running the application with +RTS -A4M -N4. The -N4 option tells the GHC runtime to use 4 threads (note that you have to build the application with -threaded for this to work). The -A4M option sets the size of the garbage collector's allocation area. Our experiments showed that these two seemed to have the biggest impact on performance. But that was done a long time ago, and GHC has changed a lot since then, so you'll want to play around with these and find what works best for you. This page has in-depth information about the other command-line options available to control GHC's runtime if you wish to do more experimentation.
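For concreteness, the build-and-run incantation might look like this (Main.hs is a placeholder for your app's entry point; -rtsopts is one way to make sure the compiled binary accepts the runtime flags):

$ ghc -O2 -threaded -rtsopts Main.hs
$ ./Main +RTS -N4 -A4M -RTS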
A little work was done last year on updating the benchmarks. If you're interested in that, look around the different branches in the snap-benchmarks repository. It would be great to get more help on a new set of benchmarks.

I need an idea for a program with threads

This might sound funny, but I got a homework assignment and I can't make any sense of it.
The statement sounds like this:
"Find the 10 largest numbers in an array of 100,000 randomly generated integers. You will use threads to compare 2 numbers. A daemon thread will print at regular intervals the progress and the number of unchecked integers left."
I know it's not appropriate to ask for help on the forum regarding homework, but I am really REALLY frustrated... I just can't figure out why, and how, I should use threads to deal with number comparison, especially when it is about 100,000 integers. Even if I go through the list with a simple for loop using a max variable and print out all the values, it only takes about 150 milliseconds at most (I tried)!!
Could you at least give me a starting idea on it ???
Sorry for wasting your time!
--CONTINUED--
As I said in a reply, breaking the array up into X chunks (one per thread) would be a good idea if I had to find only 1 element (the largest), but I need to find the 10 largest. Suppose one thread finds the max value in the chunk it is working on and discards the rest; one of the discarded values might actually be larger than the rest of the elements in the other chunks. That is why I don't think this would give a good result.
Feel free to argue my point of view!
Each thread can iterate through 100,000 / X numbers (where X is the number of threads) and keep track of the top 10 numbers in that thread. Then, when all threads are done, you can merge the results.
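A minimal sketch of that approach in Python (the assignment is presumably in another language, but the structure carries over; the thread count and reporting interval are arbitrary choices):

import heapq
import random
import threading
import time

NUMS = [random.randint(0, 10**9) for _ in range(100_000)]
N_THREADS = 4
results = []
lock = threading.Lock()
remaining = len(NUMS)

def worker(chunk):
    global remaining
    top = []                         # this thread's 10 largest so far
    for n in chunk:
        heapq.heappush(top, n)
        if len(top) > 10:
            heapq.heappop(top)       # drop the smallest of the 11
        with lock:
            remaining -= 1
    with lock:
        results.append(top)

def reporter():                      # the daemon progress thread
    while remaining:
        print(remaining, "integers left to check")
        time.sleep(0.1)

threading.Thread(target=reporter, daemon=True).start()
size = len(NUMS) // N_THREADS
threads = [threading.Thread(target=worker, args=(NUMS[i * size:(i + 1) * size],))
           for i in range(N_THREADS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(heapq.nlargest(10, (n for top in results for n in top)))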
Break the list of 100k numbers into batches of some size, then spawn a thread to do the checking on each of the batches, and merge the results at the end.
The bonus part of this is that such a solution will easily scale to huge lists of numbers.
The reason you need to do it with threads for this problem is not because you can't solve it without threads, but because it's a good example of a threadable problem (namely, one that can be parallelized), and a good teaching example, since the business logic is so simple that you can concentrate on the threading work.
No matter how you slice it, finding the max in an unsorted array means a linear search. You could simply partition the data among the number of available threads, then find the max number among the values that the threads came up with.
Well, you want to put that list of integers in a threadsafe queue. Each thread processes the numbers by popping them off the top.
This is almost the same algorithm you already wrote, but the key is the threadsafe queue, which lets the threads pull data off it without clobbering each other's data (see the sketch at the end of this section).
When each thread is complete, the main thread should take the results and find the largest numbers across the threads.
EDIT
If each thread gets the 10 largest numbers in its chunk, then it doesn't matter what is in the rest of the array, since the other threads will find the largest in their own chunks. For example:
Array : numbers between 1 and 99
Chunk 1 : 99 98 97 ... 50
Chunk 2 : 49 48 47 ... 1
Thread one result: 99 98 97 96 95 94 93 92 91 90
Thread two result: 49 48 47 46 45 44 43 42 41 40
Merged result: 99 98 97 96 95 94 93 92 91 90 49 48 47 46 45 44 43 42 41 40
Top 10 from merge: 99 98 97 96 95 94 93 92 91 90
See, it doesn't matter that chunk 2 has no numbers larger than chunk 1.
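And here is a sketch of the queue flavor mentioned above, where workers pull numbers off a shared threadsafe queue instead of being handed fixed chunks (again Python, with arbitrary sizes):

import heapq
import queue
import random
import threading

q = queue.Queue()
for n in (random.randint(0, 10**9) for _ in range(100_000)):
    q.put(n)

def drain(top):
    while True:
        try:
            n = q.get_nowait()       # threadsafe: no two workers get the same item
        except queue.Empty:
            return
        heapq.heappush(top, n)
        if len(top) > 10:
            heapq.heappop(top)       # keep only the 10 largest seen

tops = [[] for _ in range(4)]        # one result list per worker
workers = [threading.Thread(target=drain, args=(t,)) for t in tops]
for w in workers:
    w.start()
for w in workers:
    w.join()
print(heapq.nlargest(10, (n for top in tops for n in top)))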

How do I stress test a web form file upload?

I need to test a web form that takes a file upload.
The filesize in each upload will be about 10 MB.
I want to test whether the server can handle over 100 simultaneous uploads and still remain responsive for the rest of the site.
Repeated form submissions from our office will be limited by our local DSL line.
The server is offsite with higher bandwidth.
Answers based on experience would be great, but any suggestions are welcome.
Use the ab (ApacheBench) command-line tool that is bundled with Apache (I have just discovered this great little tool). Unlike cURL or wget, ApacheBench was designed for performing stress tests on web servers (any type of web server!). It generates plenty of statistics too. The following command will send an HTTP POST request including the file test.jpg to http://localhost/ 100 times, with up to 4 concurrent requests.
ab -n 100 -c 4 -p test.jpg http://localhost/
It produces output like this:
Server Software:
Server Hostname: localhost
Server Port: 80
Document Path: /
Document Length: 0 bytes
Concurrency Level: 4
Time taken for tests: 0.78125 seconds
Complete requests: 100
Failed requests: 0
Write errors: 0
Non-2xx responses: 100
Total transferred: 2600 bytes
HTML transferred: 0 bytes
Requests per second: 1280.00 [#/sec] (mean)
Time per request: 3.125 [ms] (mean)
Time per request: 0.781 [ms] (mean, across all concurrent requests)
Transfer rate: 25.60 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 2.6 0 15
Processing: 0 2 5.5 0 15
Waiting: 0 1 4.8 0 15
Total: 0 2 6.0 0 15
Percentage of the requests served within a certain time (ms)
50% 0
66% 0
75% 0
80% 0
90% 15
95% 15
98% 15
99% 15
100% 15 (longest request)
Automate Selenium RC using your favorite language. Start 100 threads of Selenium, each typing the path of the file in the input and clicking submit.
You could generate 100 sequentially named files to make looping over them easy, or just use the same file over and over again.
I would perhaps guide you towards using cURL and submitting random data (say, read 10 MB out of /dev/urandom and encode it into base32) through a POST request, manually fabricating the body to be a file upload (it's not rocket science; see the sketch at the end of this answer).
Fork that script 100 times, perhaps over a few servers. Just make sure the sysadmins don't think you are doing a DDoS, or something :)
Unfortunately, this answer remains a bit vague, but hopefully it helps by nudging you onto the right track.
Continued as per Liam's comment:
If the server receiving the uploads is not on the same LAN as the clients connecting to it, it is better to use nodes as remote as possible for stress testing, if only to simulate behavior as authentically as possible. But if you don't have access to computers outside the local LAN, the local LAN is always better than nothing.
Stress testing from the same hardware would not be a good idea, as you would put double the load on the server: generating the random data, packing it, sending it through the TCP/IP stack (although probably not over Ethernet), and only then can the server do its magic. If the sending part is outsourced, you get double the performance (take that with an arbitrarily sized grain of salt) on the receiving end.
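A sketch of that approach in Python rather than raw cURL, since fabricating the multipart body by hand is exactly what a library can do for you (assumes the third-party requests package; the URL and form field name are placeholders for your form's):

import os
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "http://upload.example.com/form"      # placeholder
BLOB = os.urandom(10 * 1024 * 1024)         # one 10 MB random payload, reused

def upload(i):
    # requests builds the multipart/form-data body for us
    files = {"file": ("test%03d.bin" % i, BLOB)}
    return requests.post(URL, files=files, timeout=120).status_code

# 100 simultaneous uploads; run it from machines with enough upstream bandwidth
with ThreadPoolExecutor(max_workers=100) as pool:
    print(list(pool.map(upload, range(100))))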
