Python dns.resolver of a NS doesn't work on subdomains? - dnspython

I'm trying to get the NS records of a subdomain, but I get a NoAnswer exception.
I suspect it's because asking for NS records only works on the root domain, but what can I do then?
Is there a way to achieve this with dnspython, or do I have to strip each subpart until the NS lookup works?

Here's my answer to my own question. I finally decided to climb up the ladder and strip each subpart of the domain up to the last dot (keeping the last two parts).
Here's the code:
from dns import exception, resolver

def ns_lookup(domain):
    parts = domain.split('.')
    lookup = resolver.Resolver()
    lookup.timeout = 3
    while len(parts) >= 2:
        try:
            nameservers = [a.to_text() for a in lookup.query('.'.join(parts), 'NS')]
            ns_ips = [resolver.query(ns)[0].to_text() for ns in nameservers]
            if len(ns_ips) > 0:
                return ns_ips
        except exception.Timeout:
            return []
        except (resolver.NoAnswer, resolver.NXDOMAIN, resolver.NoNameservers):
            pass
        parts.pop(0)  # strip the leftmost label and retry one level up
    return False
And it works this way:
ns_ips = ns_lookup(domain)
if ns_ips is False:
    return False
if len(ns_ips) == 0:
    return []
lookup = resolver.Resolver(configure=False)
lookup.timeout = 3
lookup.nameservers = ns_ips
lookup.query(domain, 'A')  # The record you want.
Hope that helps anyone else.
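The climb-up-the-labels idea can be sketched without any DNS traffic; a minimal sketch with a hypothetical helper `parent_candidates` (an illustrative name, not part of dnspython) that yields each candidate zone name, from the full subdomain up to a two-label suffix:

```python
def parent_candidates(domain, min_labels=2):
    """Yield the domain and each successive parent, keeping at least min_labels labels."""
    parts = domain.split('.')
    while len(parts) >= min_labels:
        yield '.'.join(parts)
        parts.pop(0)  # strip the leftmost label

# Each candidate would be tried with an NS query until one answers.
print(list(parent_candidates('a.b.example.com')))
# → ['a.b.example.com', 'b.example.com', 'example.com']
```

This keeps the retry order explicit and separates the name manipulation from the actual resolver calls.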

Related

kivyMD list update Icon

I am using the following function to create an MDList of TwoLineIconListItem widgets inside a ScrollView. What I would like to do is change the icon from another function. I thought something like x.icon = 'New_icon' might work, but it didn't. Not sure where to look to get the desired result.
def rule_list(self):
    '''Query of all rules and generates a list view under the rule tab....not really working all the way yet'''
    db.execute('''SELECT * from rules''')
    self.rows = db.fetchall()
    for r in self.rows:
        self.rule = f'{self.cfg["host"]}:{self.cfg["port"]}/api/firewall/filter/getRule/{r[2]}'
        rules = TwoLineIconListItem(
            text=r[1],
            secondary_text=r[2],
            on_release=lambda x: threading.Thread(
                target=self.rule_on_click, args=(x.secondary_text, x), daemon=True).start()
        )
        self.check = requests.get(url=self.rule, auth=(
            self.key, self.secret), verify=False)
        if self.check.status_code == 200:
            check_rule = json.loads(self.check.text)
            if check_rule['rule']['enabled'] == '1':
                rules.add_widget(IconLeftWidget(
                    icon='checkbox-marked-circle-outline'))
            else:
                rules.add_widget(IconLeftWidget(
                    icon='checkbox-blank-circle-outline'))
        self.root.ids.ruleList.add_widget(rules)
I solved this by using the following in the function that holds the logic for assigning the correct icon:
x.children[0].children[0].icon = new_icon

Python - Index Error - list index out of range

I am parsing data from a website but getting the error "IndexError: list index out of range". However, while debugging I can see all the values. It worked completely fine before, and I suddenly can't understand why I am getting this error.
str2 = cols[1].text.strip()
IndexError: list index out of range
Here is my code.
import requests
import DivisionModel
from bs4 import BeautifulSoup
from time import sleep

class DivisionParser:
    def __init__(self, zoneName, zoneUrl):
        self.zoneName = zoneName
        self.zoneUrl = zoneUrl

    def getDivision(self):
        response = requests.get(self.zoneUrl)
        soup = BeautifulSoup(response.content, 'html5lib')
        table = soup.findAll('table', id='mytable')
        rows = table[0].findAll('tr')
        division = []
        for row in rows:
            if row.text.find('T No.') == -1:
                cols = row.findAll('td')
                str1 = cols[0].text.strip()
                str2 = cols[1].text.strip()
                str3 = cols[2].text.strip()
                strurl = cols[2].findAll('a')[0].get('href')
                str4 = cols[3].text.strip()
                str5 = cols[4].text.strip()
                str6 = cols[5].text.strip()
                str7 = cols[6].text.strip()
                divisionModel = DivisionModel.DivisionModel(self.zoneName, str2, str3, strurl, str4, str5, str6, str7)
                division.append(divisionModel)
        return division
These are the values at the time of debugging:
str1 = {str} '1'
str2 = {str} 'BHUSAWAL DIVN-ENGINEERING'
str3 = {str} 'DRMWBSL692019t1'
str4 = {str} 'Bhusawal Division - TRR/P- 44.898Tkms & 2.225Tkms on 9 Bridges total 47.123Tkms on ADEN MMR &'
str5 = {str} 'Open'
str6 = {str} '23/12/2019 15:00'
str7 = {str} '5'
strurl = {str} '/works/pdfdocs/122019/51822293/viewNitPdf_3021149.pdf'
When parsing the website, I check for "T No." in each row and then read all the values from the td cells. The site's developer put "No Result" into some td rows, which is why at runtime my loop could not find those cells and threw the "list index out of range" error.
Well, thanks to all for the help.
class DivisionParser:
    def __init__(self, zoneName, zoneUrl):
        self.zoneName = zoneName
        self.zoneUrl = zoneUrl

    def getDivision(self):
        global rows
        try:
            response = requests.get(self.zoneUrl)
            soup = BeautifulSoup(response.content, 'html5lib')
            table = soup.findAll('table', id='mytable')
            rows = table[0].findAll('tr')
        except IndexError:
            sleep(2)
        division = []
        for row in rows:
            if row.text.find('T No.') == -1:
                try:
                    cols = row.findAll('td')
                    str1 = cols[0].text.strip()
                    str2 = cols[1].text.strip()
                    str3 = cols[2].text.strip()
                    strurl = cols[2].findAll('a')[0].get('href')
                    str4 = cols[3].text.strip()
                    str5 = cols[4].text.strip()
                    str6 = cols[5].text.strip()
                    str7 = cols[6].text.strip()
                    divisionModel = DivisionModel.DivisionModel(
                        self.zoneName, str2, str3, strurl, str4, str5, str6, str7)
                    division.append(divisionModel)
                except IndexError:
                    print("No Result")
        return division
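An alternative to catching IndexError is to check the row length up front and skip the short "No Result" rows. A minimal sketch, with plain lists of strings standing in for the td cells (`parse_rows` and `expected_cols` are illustrative names, not part of the original code):

```python
def parse_rows(rows, expected_cols=7):
    """Keep only rows that have the expected number of cells."""
    parsed = []
    for cols in rows:
        if len(cols) < expected_cols:
            continue  # e.g. a "No Result" placeholder row
        parsed.append(tuple(c.strip() for c in cols))
    return parsed

rows = [['1', 'A', 'B', 'C', 'D', 'E', 'F'], ['No Result']]
print(parse_rows(rows))
# → [('1', 'A', 'B', 'C', 'D', 'E', 'F')]
```

Either approach works; the length check makes the "skip incomplete rows" intent explicit, while try/except also catches malformed cells.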
As a general rule, whatever comes from the cold and hostile outside world is totally unreliable. Here:
response = requests.get(self.zoneUrl)
soup = BeautifulSoup(response.content, 'html5lib')
you seem to suffer from the terrible delusion that the response will always be what you expect. Hint: it won't. It is guaranteed that sometimes the response will be something different - it could be that the site is down, or that it decided to blacklist your IP because they don't like you scraping their data, or whatever.
IOW, you really want to check the response's status code AND the response content. Actually, you want to be prepared for just about anything - FWIW, since you don't specify a timeout, your code could just stay frozen forever waiting for a response.
So what you actually want here is something along the lines of:
try:
    # cf requests doc
    response = requests.get(yoururl, timeout=some_appropriate_value)
    # cf requests doc
    response.raise_for_status()
except requests.exceptions.RequestException as e:
    # Nothing else you can do here - depending on the context
    # (script? library code?), you either want to re-raise the
    # exception, raise your own exception, or just show the error
    # message and exit. Only you can decide the appropriate course.
    print("couldn't fetch {}: {}".format(yoururl, e))
    return

if not response.headers['content-type'].startswith("text/html"):
    # Idem - not what you expected, and you can't do much except
    # mention the fact to the caller one way or another. Here I
    # just print the error and return, but in library code you
    # would raise an exception instead.
    print("{} returned non text/html content {}".format(yoururl, response.headers['content-type']))
    print("response content:\n\n{}\n".format(response.text))
    return

# etc...
requests has rather exhaustive documentation; I suggest you read more than the quickstart to learn to use it properly. And that's only half the job - even if you do get a 200 response with no redirects and the right content type, it doesn't mean the markup is what you expect, so here again you have to double-check what you get from BeautifulSoup - for example here:
table = soup.findAll('table', id='mytable')
rows = table[0].findAll('tr')
There's absolutely no guarantee that the markup contains any table with a matching id (nor any table at all, FWIW), so you have to either check beforehand or handle exceptions:
tables = soup.findAll('table', id='mytable')
if not tables:
    # oops, no matching tables?
    print("no table 'mytable' found in markup")
    print("markup:\n{}\n".format(response.text))
    return
rows = tables[0].findAll('tr')
# idem, the table might be empty, etc.
One of the fun things with programming is that handling the nominal case is often rather straightforward - but then you have to handle all the possible corner cases, and this usually requires as much or more code than the nominal case ;-)

trying to get a wordlist from a textfile

I'm testing new things and I landed on Python... As a first project I found the classic hangman game.
I followed a tutorial that shows how to do it step by step, but when it comes to the word list I got lost...
So far I've been retrieving the words from a variable _WORDS, which holds some strings. I now have a "words.txt" file from which I want to get the words instead of the variable _WORDS = ("word1", "word2", "etc"). I found some examples but I can't hack it... any inspiration? This is my code...
_WORDS = open("words.txt", 'r')
_IFYOUWIN = "you won!"

def __init__(self):
    self._word = choice(self._WORDS)
    self._so_far = "-" * len(self._word)
    self._used = []
    self._wrong_answers = 0

def play(self):
    self._reset_game()
    self._start_game()
    while self._wrong_answers < len(self._HANGMAN) and self._so_far != self._word:
        self._print_current_progress()
        guess = self._user_guess()
        self._check_answer(guess)
I tried importing "os" like this:
import os
_WORDS = os.path.expanduser("~/Desktop/pythontlearn/words.txt")
but no luck.
For the record, I found this working for me:
path = '/words.txt'
_WORDS = open(path, 'r')
but I'm still not sure why it does not work with something like
_WORDS = open("words.txt", 'r')
They look exactly the same to me.
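A bare "words.txt" is resolved relative to the current working directory, which may not be the folder the script lives in, so the two calls only look the same. Also note that open() returns a file object, not a list of words, so choice() over it won't pick a clean word. A minimal sketch of loading the file into a list first (`load_words` is an illustrative helper name, not from the tutorial):

```python
from random import choice

def load_words(path):
    """Read one word per line into a list, skipping blank lines."""
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]

# Hypothetical usage:
# words = load_words("words.txt")
# secret = choice(words)
```

Using a list also means the file is opened and closed once, instead of keeping a file handle around for the lifetime of the game.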

Unable to copy list to set. 'float' object is not iterable

lst, score_set, final_lst = [], [], []
if __name__ == '__main__':
    for _ in range(int(input())):
        name = input()
        score = float(input())
        score_set.append(score)
        lst.append([name, score])
    new_set = set()
    for i in range(0, len(score_set)):
        item = score_set[i]
        print(item)
        new_set.update(item)
I am trying to copy a list into a set to remove duplicates. In my code, if I remove the last line, the code runs fine. Could you please help?
If you want to add a single value, use add() instead of update():
new_set.add(item)
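The distinction is that update() iterates over its argument (so a bare float raises TypeError), while add() inserts the argument as a single element. A minimal sketch:

```python
scores = [90.5, 88.0, 90.5]

deduped = set()
for s in scores:
    deduped.add(s)  # add() takes a single value
print(deduped)

# update() expects an iterable, so a bare float fails:
try:
    set().update(90.5)
except TypeError as e:
    print(e)  # 'float' object is not iterable

# Simplest fix for deduplication: build the set directly from the list.
assert set(scores) == deduped
```

update() is meant for merging whole iterables into a set, e.g. new_set.update(score_set) would add every score at once.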

why my result is overriding in python

def geteta(self, b=0, etatype="schedule"):
    eta_3_cuurtime = self.currtime + timedelta(hours=b)
    [self.a["legdata"].get(str(i)).append(
        {'depdetails': self.func(self.a["legdata"].get(str(i))[2], eta_3_cuurtime, etatype, self.thceta)})
     if i == 1 else
     self.a["legdata"].get(str(i)).append(
        {'depdetails': self.func(self.a["legdata"].get(str(i))[2], self.a['legdata'].get(str(i - 1))[3]['depdetails'][2])})
     for i in range(1, len(self.a['legdata']) + 1)]
    eta1 = self.a["legdata"].get(str(len(self.a["legdata"])))[3]['depdetails'][2]  # 13. eta is last item's depdetails[2]
    return eta1

def main(self):
    eta3 = self.geteta()
    eta4 = self.geteta(b=3)
When I call the geteta method with different inputs, my results (eta3, eta4) are the same values. When I run the method individually (commenting out the first geteta call) it gives correct values. I know the values are being overridden somewhere, but I am not able to figure out where. I have been struggling for a couple of days to find this; please help me with it.
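One likely culprit (an assumption, since the full class isn't shown): geteta appends to the lists inside self.a["legdata"], so the first call mutates shared state that the second call then reads from a fixed index. A minimal sketch of how appending to shared state makes a second call return stale data:

```python
class Schedule:
    def __init__(self):
        self.legs = {'1': ['leg-data']}  # shared mutable state

    def get_eta(self, b=0):
        # Appends into the shared list instead of computing into a
        # fresh structure, so every call sees what earlier calls appended.
        self.legs['1'].append({'eta': b})
        return self.legs['1'][1]['eta']  # index 1 is the FIRST appended entry

s = Schedule()
first = s.get_eta(0)
second = s.get_eta(3)  # still reads the entry appended by the first call
print(first, second)
# → 0 0
```

The fix would be to build depdetails into a fresh copy of the leg data (or reset self.a) at the start of each geteta call, rather than appending to the shared lists.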
