I am working on a Python 3.6 based script to monitor DNS resolution time, using dnspython. I need to understand what the best approach is to get the resolution time. I am currently using the following:
Method 1:
import dns.resolver
import time
dns_start = time.perf_counter()
answers = dns.resolver.query(domain, qtype)
dns_end = time.perf_counter()
print("DNS record is: " ,answers.rrset)
print('DNS time =' ,(dns_end - dns_start)* 1000,"ms")
Method 2:
import dns.resolver
import time
answers = dns.resolver.query(domain, qtype)
print("DNS record is: " ,answers.rrset)
print('DNS time =' , answers.response.time * 1000,"ms")
Thanks
"It depends" as the famous saying says.
You need to define what is "best" for you in "best approach". The two methods do things correctly, just different things.
The first one counts the network time, the remote server processing, and the local generation and parsing of DNS messages, including the time spent by Python in the dnspython library.
The second method takes only the network and remote processing time into account: if you search for response_time in https://github.com/rthalley/dnspython/blob/master/dns/query.py you will see it is computed as the difference between the time just before the message is sent (hence after it has been built, so the generation time is not included) and just after it has been received (hence before it is parsed, so the parsing time is not included).
The second method basically tests the remote server's performance, irrespective of the performance of your local program (including Python itself and the dnspython library), although the network time between you and the remote server is still counted.
The first method shows the perceived time, as it includes all the time the code needs to do something, that is, build the DNS packet and parse the result, actions without which the calling application could do nothing with the DNS content.
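To see the difference in practice, here is a minimal sketch that reports both numbers for the same query; the domain and record type are placeholders, and it reuses the dns.resolver.query() API from the question (newer dnspython releases call this dns.resolver.resolve()):
import time
import dns.resolver

domain, qtype = "example.com", "A"   # placeholder query

# Method 1: wall-clock time around the whole call (build + network + parse)
wall_start = time.perf_counter()
answers = dns.resolver.query(domain, qtype)
wall_end = time.perf_counter()

print("DNS record is:", answers.rrset)
print("Perceived time =", (wall_end - wall_start) * 1000, "ms")

# Method 2: time measured inside dnspython around the network exchange only
print("Network + server time =", answers.response.time * 1000, "ms")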
I need to measure the response time for a file export using the TruClient protocol in LoadRunner. After I click on the export button the file gets downloaded, but I am not able to measure the time for the download accurately.
Pull that data from the HTTP request log, which will show the download request and, if the W3C time-taken value is included in the log, the time required to fulfill the download.
You can process the log at the end of the test for the response time data. If you need to, you can import a set of datapoints into Analysis for representation with the rest of your data. You might also want to consider a normalized value for your download instead of a raw response time. I imagine that the files are of different sizes, so naturally they will have different download times. However, if you divide download bytes by time (in seconds), then you have a normalized measurement of bytes per second, which allows you to compare one download to the next for consistent operation.
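As a rough illustration of that normalization (the numbers below are made up; where the bytes and time-taken values come from depends on your log format):
def bytes_per_second(download_bytes, time_taken_seconds):
    # Normalize a download to throughput so different file sizes stay comparable.
    if time_taken_seconds <= 0:
        return 0.0
    return download_bytes / float(time_taken_seconds)

# Example: a 25 MB export that took 12.5 s comes out at 2 MB/s.
print(bytes_per_second(25000000, 12.5))   # 2000000.0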
Also, keep in mind that since you are downloading a file and writing it to a local disk for (presumably) multiple users on a host, you run the risk of turning your local file system into a bottleneck. You can see the same effect if you turn logging up to the highest level for all users and run your test: the wait for lock and wait for write, plus the actual writing of data, become a drag anchor on the performance of your virtual user. This is why the recommended log level is "log on error", or to send the error to the output window of the controller via lr_output_message() or lr_vuser_status_message(). Consider a control load generator with the same hardware definition as the others, with only a single virtual user of this type on it. If the control group and the global group degrade together, then you have an app issue. If your control user does not degrade but your other users do, then you have a test-bed-induced influence on your results.
These are all issues independent of the tool you are using for the test.
Lately I have seen a high time to first byte for my sites.
Most of the time it is JavaScript. A test at webpagetest.org usually shows something like:
URL: http://example.com/
Loaded by: http://example.com/some-kind-of-javascript.js
When I remove that JavaScript, another JavaScript appears in its place.
What does "Loaded by" mean exactly?
Check this example test result:
https://www.webpagetest.org/result/190729_JY_cb028989b0f44671fba830c9eaca29d7/1/details/#waterfall_view_step1
I'm not sure why you think JavaScript is the problem; it looks to me like it's the initial HTML that is causing it.
Time to first byte for a resource is the time taken from after the request is sent (e.g. GET /) until the first byte comes back. It excludes the DNS lookup, TCP connection and SSL handshake time, so it really is a measure of the time taken to start receiving that resource. Note that the "first byte" time at the top of the waterfall is the full end-to-end time, including DNS/TCP/SSL and any redirects, whereas the TTFB for each individual resource has these split out.
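A minimal sketch of that per-resource measurement, assuming a plain HTTP (non-TLS) site; the host and path are placeholders:
import socket
import time

host, path = "example.com", "/"

# TCP connect happens before the clock starts, mirroring how per-resource TTFB excludes it
sock = socket.create_connection((host, 80), timeout=10)
request = "GET {} HTTP/1.1\r\nHost: {}\r\nConnection: close\r\n\r\n".format(path, host)

start = time.time()              # clock starts as the request is sent...
sock.sendall(request.encode())
first_byte = sock.recv(1)        # ...and stops on the first byte of the response
ttfb = time.time() - start

print("TTFB: %.1f ms" % (ttfb * 1000))
sock.close()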
I don't know how your home page is created; I would guess it's not a static page, so whatever is generating it (PHP?) is taking too long. Whether this is due to bad backend code, an under-resourced server, a slow database, or something else is impossible to say from the outside. I would suggest getting in touch with your hosting provider and/or reviewing your code and server.
RFID readers switch between antennas when using multiple antennas: the reader runs one antenna while the others sleep, and switches through them one by one. The switching is fast, so running one antenna at a time normally doesn't matter. According to my observations, however, every switch takes 1 second.
(After some time I realised this 1 second applies only to the Motorola FX7500. Most other readers do it the right way, lightning fast, in milliseconds.)
That is what I know so far.
Now, in my specific application I need this procedure to run faster, like 200ms instead of 1s.
Is this value changeable? If so, which message and parameter in LLRP can modify this value?
Actually, the 1 second problem is specific to the Motorola FX7500 reader. By examining the LLRP messages that Motorola's own library exchanges with the FX7500, I discovered that there are vendor-specific parameters which can be set via LLRP's custom extension fields. These parameters and settings can be found in the Motorola readers' software guide. The switch time is one of these vendor-specific parameters; it is not a parameter of generic LLRP. A piece of code generating an LLRP message that includes the custom extension in the proper format solved my issue.
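For readers who haven't built one before, this is roughly how a generic LLRP Custom Parameter (TLV type 1023) is laid out. The vendor PEN, subtype and payload below are placeholders, so the real values for the FX7500 switch time have to come from Motorola's software guide:
import struct

def build_llrp_custom_parameter(vendor_pen, subtype, payload):
    # TLV header: 6 reserved bits + 10-bit parameter type (1023 = Custom Parameter), 16-bit length
    param_type = 1023
    length = 2 + 2 + 4 + 4 + len(payload)   # header + vendor PEN + subtype + payload
    header = struct.pack("!HH", param_type & 0x03FF, length)
    body = struct.pack("!II", vendor_pen, subtype)
    return header + body + payload

# Hypothetical example: vendor PEN 161 (Motorola) with a made-up subtype and a
# 16-bit value of 200 (ms); consult the vendor guide for the real encoding.
custom = build_llrp_custom_parameter(161, 0x100, struct.pack("!H", 200))
print(custom.hex())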
The scope of this work is to query two machines' high-resolution timers at the "same time" and get the clock inaccuracy between both systems. This is done by having a third machine send an SNMP GET for a custom OID, where the SNMP agent is configured to invoke a Perl script that returns the high-resolution timer. All works fine, in that the SNMP GET manages to return the expected result. However, it appears that regardless of the frequency of the snmpget queries, the SNMP agent only performs a fresh query to the script at ~5 second intervals. I am running Net-SNMP version 5.4.3. After some research I've seen that it is typical of Net-SNMP to cache the results, and that this is done on a per-MIB-tree basis. There is a MIB table (nsCacheTable) with the respective intervals, which can be seen by running snmpwalk against 1.3.6.1.4.1.8072.1.5.3. Apparently the values can be changed to 0 to remove caching, although some of them are read-only; I've only managed to set a few of them to 0 using snmpset, as most of them return a "Bad object type" error.
I know only very basic SNMP, so I followed a guide online and mapped the custom OID below to the Perl script with this line in snmpd.conf:
extend .1.3.6.1.4.1.27654.3 return_date /usr/bin/perl [directory]/[perl script name].pl
Then the actual OID containing the output (the time in epoch format) is:
iso.3.6.1.4.1.27654.3.3.1.1.11.114.101.116.117.114.110.95.100.97.116.101
Does anyone have any ideas how I can disable the caching for this OID?
Thanks in advance.
---EDIT---
According to this blog post, instead of disabling the caching one can use pass_persist scripts, which look more complex to implement at first glance. The Perl script I have been calling is the one below:
#!/usr/bin/perl
# THIS SCRIPT RETURNS THE EPOCH TIME OF DAY IN MICROSECONDS
use Time::HiRes qw(gettimeofday);
($s, $usec) = gettimeofday();
# zero-pad the microseconds so e.g. 42 becomes 000042
$newtime = sprintf("%d%06d", $s, $usec);
print ($newtime);
Can anyone provide help in converting this script for pass_persist, and show what snmpd.conf should look like?
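For context, here is a rough, untested sketch (in Python rather than Perl, but the idea is the same) of what I understand a pass_persist handler would look like, based on the pass_persist protocol described in the snmpd.conf man page. The OID is the one above; the value sub-OID and the script path are placeholders:
#!/usr/bin/python
# Rough pass_persist sketch: snmpd keeps this process alive and talks to it over
# stdin/stdout ("PING" -> "PONG"; "get"/"getnext" followed by an OID -> three
# lines: OID, type, value, or "NONE" if the OID is not handled).
#
# snmpd.conf would then contain something like:
#   pass_persist .1.3.6.1.4.1.27654.3 /usr/bin/python /path/to/hires_time.py
import sys
import time

BASE_OID = ".1.3.6.1.4.1.27654.3"
VALUE_OID = BASE_OID + ".1"

def hires_epoch():
    # epoch time in microseconds, like the Perl gettimeofday() script
    return str(int(time.time() * 1000000))

while True:
    line = sys.stdin.readline()
    if not line:
        break
    cmd = line.strip()
    if cmd == "PING":
        print("PONG")
    elif cmd in ("get", "getnext"):
        oid = sys.stdin.readline().strip()
        if (cmd == "get" and oid == VALUE_OID) or (cmd == "getnext" and oid == BASE_OID):
            print(VALUE_OID)
            print("string")
            print(hires_epoch())
        else:
            print("NONE")
    sys.stdout.flush()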
I'm using IPython.parallel to process a large amount of data on a cluster. The remote function I run looks like:
def evalPoint(point, theta):
    # do some complex calculation
    return (cost, grad)
which is invoked by this function:
def eval(theta, client, lview, data):
    async_results = []
    for point in data:
        # evaluate current data point
        ar = lview.apply_async(evalPoint, point, theta)
        async_results.append(ar)
    # wait for all results to come back
    client.wait(async_results)
    # and retrieve their values
    values = [ar.get() for ar in async_results]
    # unzip data from original tuple
    totalCost, totalGrad = zip(*values)
    avgGrad = np.mean(totalGrad, axis=0)
    avgCost = np.mean(totalCost, axis=0)
    return (avgCost, avgGrad)
If I run the code:
client = Client(profile="ssh")
client[:].execute("import numpy as np")
lview = client.load_balanced_view()
for i in xrange(100):
    eval(theta, client, lview, data)
the memory usage keeps growing until I eventually run out (76GB of memory). I've simplified evalPoint to do nothing in order to make sure it wasn't the culprit.
The first part of eval was copied from IPython's documentation on how to use the load balancer. The second part (unzipping and averaging) is fairly straightforward, so I don't think that's responsible for the memory leak. Additionally, I've tried manually deleting objects in eval and calling gc.collect(), with no luck.
I was hoping someone with IPython.parallel experience could point out something obvious I'm doing wrong, or could confirm that this is in fact a memory leak.
Some additional facts:
I'm using Python 2.7.2 on Ubuntu 11.10
I'm using IPython version 0.12
I have engines running on servers 1-3, and the client and hub running on server 1. I get similar results if I keep everything on just server 1.
The only thing I've found similar to a memory leak for IPython had to do with %run, which I believe was fixed in this version of IPython (also, I am not using %run)
Update
Also, I tried switching logging from memory to SQLiteDB in case that was the problem, but the issue remains.
Response (1)
The memory consumption is definitely in the controller (I could verify this by (a) running the client on another machine, and (b) watching top). I hadn't realized that a non-SQLiteDB backend would still consume memory, so I hadn't bothered purging.
If I use DictDB and purge, I still see the memory consumption go up, but at a much slower rate. It was hovering around 2GB for 20 invocations of eval().
If I use MongoDB and purge, it looks like mongod is taking around 4.5GB of memory and ipcluster about 2.5GB.
If I use SQLite and try to purge, I get the following error:
File "/usr/local/lib/python2.7/dist-packages/IPython/parallel/controller/hub.py", line 1076, in purge_results
self.db.drop_matching_records(dict(completed={'$ne':None}))
File "/usr/local/lib/python2.7/dist-packages/IPython/parallel/controller/sqlitedb.py", line 359, in drop_matching_records
expr,args = self._render_expression(check)
File "/usr/local/lib/python2.7/dist-packages/IPython/parallel/controller/sqlitedb.py", line 296, in _render_expression
expr = "%s %s"%null_operators[op]
TypeError: not enough arguments for format string
So, I think if I use DictDB, I might be okay (I'm going to try a run tonight). I'm not sure if some memory consumption is still expected or not (I also purge in the client like you suggested).
Is it the controller process that is growing, or the client, or both?
The controller remembers all requests and all results, so the default behavior of storing this information in a simple dict will result in constant growth. Using a db backend (sqlite or preferably mongodb if available) should address this, or the client.purge_results() method can be used to instruct the controller to discard any/all of the result history (this will delete them from the db if you are using one).
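In code, that purge looks something like this (a sketch; the exact arguments accepted by purge_results() can vary between versions, so check its docstring):
# ask the controller to drop its stored history of past tasks
client.purge_results('all')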
The client itself caches all of its own results in its results dict, so this, too, will result in growth over time. Unfortunately, this one is a bit harder to get a handle on, because references can propagate in all sorts of directions, and it is not affected by the controller's db backend.
This is a known issue in IPython, but for now you should be able to clear the references manually by deleting the entries in the client's results/metadata dicts; and if your view is sticking around, it has its own results dict as well:
# ...
# and retrieve their values
values = [ar.get() for ar in async_results]
# clear references to the local cache of results:
for ar in async_results:
    for msg_id in ar.msg_ids:
        del lview.results[msg_id]
        del client.results[msg_id]
        del client.metadata[msg_id]
Or, you can purge the entire client-side cache with simple dict.clear():
view.results.clear()
client.results.clear()
client.metadata.clear()
Side note:
Views have their own wait() method, so you shouldn't need to pass the Client to your function at all. Everything should be accessible via the View, and if you really need the client (e.g. for purging the cache), you can get it as view.client.
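Putting those suggestions together, a sketch of eval() reworked to take only the view (untested, using the same objects and methods referenced above) might look like:
import numpy as np

def eval(theta, lview, data):
    async_results = [lview.apply_async(evalPoint, point, theta) for point in data]
    lview.wait(async_results)                  # views have their own wait()
    values = [ar.get() for ar in async_results]

    # drop cached results so the client/view dicts don't grow on every call
    client = lview.client
    for ar in async_results:
        for msg_id in ar.msg_ids:
            lview.results.pop(msg_id, None)
            client.results.pop(msg_id, None)
            client.metadata.pop(msg_id, None)

    totalCost, totalGrad = zip(*values)
    return (np.mean(totalCost, axis=0), np.mean(totalGrad, axis=0))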