I'm trying to capture ICMP ping results using pythonping on my Raspberry Pi.
The documentation says it needs to be run as root (undesirable), or that I can work around this and run as user pi by using the executor.Communicator function.
I cannot find a working example of how to do this.
My test code is simple:
from executor import execute
from pythonping import ping

# get average of 10 pings to host
# gives permission error
# ping("1.1.1.1", size=40, count=10)

# test of executor: capture result to variable
c = execute("hostname", capture=True)
print(c)
Somehow I need to use executor as a wrapper around the ping request to get around needing to be root.
I'd love for someone to show me a working example of how to do this.
pythonping is exactly what I want because I can tell it to give me the average of 10 pings to a host.
I resolved this by using the icmplib library instead, since it fully manages the exchanges and the structure of ICMP packets, and its privileged=False mode runs without root.
from icmplib import ping

# privileged=False uses unprivileged sockets, so root is not needed
host = ping("1.1.1.1", count=4, privileged=False)
print(host.avg_rtt)
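For completeness, the original executor-as-wrapper idea could also have worked by shelling out to the system ping binary (which is setuid root, so no elevated privileges are needed in Python) and parsing the iputils summary line. A rough sketch, with a hypothetical ping_avg helper:

from executor import execute

def ping_avg(host, count=10, size=40):
    # The setuid system ping does the raw-socket work, so this runs
    # fine as user pi; capture=True returns the command's output text
    output = execute("ping -c %d -s %d %s" % (count, size, host), capture=True)
    # iputils prints a summary line like:
    #   rtt min/avg/max/mdev = 13.399/13.861/14.329/0.310 ms
    for line in output.splitlines():
        if line.startswith("rtt"):
            return float(line.split("=")[1].split("/")[1])
    return None

print(ping_avg("1.1.1.1"))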
My goal is to get the current CPU usage (as a percentage) returned for display on an ePaper HAT. I'm currently using Python 3.
I've tried two solutions found on Stack Overflow, and both produced unexpected results.
import os

def getCPUuse():
    return str(os.popen("top -n1 | awk '/Cpu\(s\):/ {print $2}'").readline())

print(getCPUuse())
I get "TERM environment variable not set." in the shell when running this proposed code.
I'm not sure how to make this message go away. The usual suggestion for that error is to set TERM to xterm, but the variable seems to be set already: entering set | grep TERM in the terminal returns "TERM=xterm-256color".
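(As an aside, top launched from os.popen has no terminal attached, and top's batch mode sidesteps that. A minimal variant of the same one-liner, assuming procps top and keeping the original awk filter:)

import os

def getCPUuse():
    # -b (batch mode) lets top run without a terminal, which avoids the
    # "TERM environment variable not set" error; -n1 = one iteration
    return str(os.popen(r"top -bn1 | awk '/Cpu\(s\):/ {print $2}'").readline())

print(getCPUuse())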
import psutil

def get_CPU():
    return psutil.cpu_percent()

print(get_CPU())
That psutil snippet is the other proposed solution, but running it always returns "0.0". Doubting that the CPU load was really a constant 0.0, I ran htop in the terminal, and the average CPU load looked like ~2.8%, not 0.
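(A likely explanation for the 0.0: psutil.cpu_percent() called with no interval compares CPU times against the previous call, so the very first call returns a meaningless 0.0. Passing an interval makes it block and measure over that window. A minimal sketch:)

import psutil

# Blocking call: samples CPU times over a 1-second window, so even
# the first call returns a real percentage instead of 0.0
print(psutil.cpu_percent(interval=1))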
Perhaps I should use from gpiozero import LoadAverage instead? I'm new to programming hardware, so if someone with more experience can offer pointers on whether https://gpiozero.readthedocs.io/en/stable/api_internal.html#loadaverage is promising, that'd be helpful too.
I'm trying to keep solutions based on Python 3.
I'm following a Wireshark course that requires me to write a simple pyshark script. The problem is that the lecturer uses a Linux VM, where the network interface name is given by ifconfig, e.g. eth0.
Since I'm on Windows and unfamiliar with pyshark, I'm wondering what suffices as the network name for this argument. I can't pinpoint how my network adapter is referred to under the 'eth' naming convention. Would something as simple as the index number, e.g. 3, suffice?
I've tried variations of 'eth' to guess the right interface; the proof is in running the script and seeing it print results, but otherwise it's been a guessing game. So far nothing prints after running my script, which I take as a sign that I haven't got the adapter value right. (There is only one active network adapter on my PC.)
Would appreciate someone with knowledge of pyshark arguments chiming in. This is the script:
import pyshark

capture = pyshark.LiveCapture(interface='eth3')
for packet in capture.sniff_continuously(packet_count=5):
    try:
        print('Source = ' + packet['ip'].src)
        print('Destination = ' + packet['ip'].dst)
    except KeyError:
        # packet has no IP layer (e.g. ARP), so skip it
        print("No IP layer in this packet")
print("~Fin~")
I found that using the last part of my adapter's name, i.e. Ethernet, worked. It seems pyshark recognizes the interface by the name your system (or you) ascribed to it. For example: Network Connections -> Ethernet,
or through ipconfig on the command line -> "Ethernet adapter Ethernet".
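With that, the capture line becomes (assuming the adapter shows up as "Ethernet"; running tshark -D also lists every interface name pyshark's tshark backend can see):

import pyshark

# Windows adapter name as shown in Network Connections / ipconfig
capture = pyshark.LiveCapture(interface='Ethernet')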
I am working on a Python 3.6 based script to monitor DNS resolution time, using dnspython. I need to understand which is the best approach for getting the resolution time. I am currently using the following:
Method 1:
import dns.resolver
import time

domain, qtype = "example.com", "A"  # example inputs

dns_start = time.perf_counter()
answers = dns.resolver.query(domain, qtype)
dns_end = time.perf_counter()

print("DNS record is:", answers.rrset)
print("DNS time =", (dns_end - dns_start) * 1000, "ms")
Method 2:

import dns.resolver

domain, qtype = "example.com", "A"  # example inputs

answers = dns.resolver.query(domain, qtype)

print("DNS record is:", answers.rrset)
print("DNS time =", answers.response.time * 1000, "ms")
Thanks
"It depends" as the famous saying says.
You need to define what "best" means in your "best approach". Both methods measure correctly; they just measure different things.
The first one counts the network time, the remote server's processing time, and the local generation and parsing of the DNS messages, including time spent in Python and the dnspython library.
The second method takes into account only the network and remote-processing time: if you search for response_time in https://github.com/rthalley/dnspython/blob/master/dns/query.py you will see it is computed as the difference between the time just before the message is sent (hence after it has been built, so generation time is not included) and just after it is received (hence before it is parsed, so parsing time is not included).
The second method can therefore essentially measure the remote server's performance, irrespective of the performance of your local program (including Python itself and the dnspython library), though the network time between you and the remote server is still counted.
The first method shows the perceived time, as it includes all the time the code needs to build the DNS packet and parse the result, actions without which the calling application could do nothing with the DNS content.
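To see the gap directly, here is a minimal sketch (with hypothetical example inputs) that prints both measurements for the same query:

import time
import dns.resolver

domain, qtype = "example.com", "A"  # example inputs

start = time.perf_counter()
answers = dns.resolver.query(domain, qtype)
wall_clock = (time.perf_counter() - start) * 1000  # build + network + parse

print("perceived time: %.2f ms" % wall_clock)
print("network + server time: %.2f ms" % (answers.response.time * 1000))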
I have an Amazon EC2 instance running and use twurl to connect to the Twitter /statuses/filter.json streaming API to collect various sporting tweets.
It all works pretty nicely, to be honest, but as a novice I cannot for the life of me figure out how to run the process for only, say, 100 tweets, or for 5 minutes at a time.
In the Ubuntu terminal, I run the following command:
sudo bash stream.sh
Which calls the bash script containing the following code:
twurl -t -d track=NHL language=en -H stream.twitter.com /1.1/statuses/filter.json > tweets.json
If I manually end the process by pressing CTRL+C, this works perfectly. However, what I would really like is to be able to collect 100 tweets at certain points of the day. Any ideas how I might build this in? I've Googled it but have so far come up short...
Have worked it out!
It ended up being massively simple:
timeout 5m bash stream.sh
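For the 100-tweet case, the streaming API delivers one JSON object per line, so piping through head should end the stream after roughly 100 tweets (untested sketch; this would replace the redirect inside stream.sh):

twurl -t -d track=NHL language=en -H stream.twitter.com /1.1/statuses/filter.json | head -n 100 > tweets.json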
I am writing a Perl script which issues a ping to a certain IP address, with a ping size of 65000 and a count of 1000.
When the remote PC is up, things are okay: the ping succeeds and ends after sending 1000 packets.
However, when the remote PC is down, it always returns "Destination host unreachable". Ping keeps trying for too long, sending ARP and ping requests, before it eventually gives up with a 100% packet loss summary.
My question is: how can I make ping exit if, say, the first 100 pings get no response? I do not want to wait too long when the initial pings themselves fail. I want ping to exit. How do I do this?
I am currently using Linux for my script. Please let me know how to do this for both Linux and Windows.
[Please note the ping packet size can vary, so I want a solution that is independent of the size/count.]
I would recommend using the Net::Ping module, which gives you the flexibility to control individual pings directly from your script.
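A minimal sketch of the idea in Perl (assuming ICMP pings, which require root, and a hypothetical threshold of 100 consecutive failures):

use strict;
use warnings;
use Net::Ping;

my $p = Net::Ping->new("icmp");   # ICMP protocol needs root
my $host = "192.168.1.10";        # example address
my $failures = 0;

for my $i (1 .. 1000) {
    if ($p->ping($host, 1)) {     # 1-second timeout per probe
        $failures = 0;            # reset on any success
    } else {
        last if ++$failures >= 100;  # give up after 100 straight misses
    }
}
$p->close;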