I am trying to verify email addresses using the approach below, which is cited and recommended in one form or another in countless places. The problem is that when it gets to the server.connect(mxRecord) call, it stalls and times out 100% of the time.
I have tried:
at least 30 different domain names to make sure this isn't some networking issue that's specific to a certain domain
adjusting the timeout limit
explicitly specifying the port to connect over (465 & 587)
running "smtplib.SMTP_SSL()" instead of "smtplib.SMTP()"
My setup:
Home wifi
No proxies
import dns.resolver
import smtplib
import socket
addressToVerify = 'rickymartin@yahoo.com'
domainToVerify = 'yahoo.com'
records = dns.resolver.query(domainToVerify, 'MX')
mxRecord = records[0].exchange
mxRecord = str(mxRecord)
print(mxRecord)
server = smtplib.SMTP(timeout=20)
server.set_debuglevel(0)
server.connect(mxRecord)
server.helo(server.local_hostname)
server.mail('me@domain.com')
code, message = server.rcpt(str(addressToVerify))
server.quit()
if code == 250:
    print('Valid')
else:
    print('Invalid')
This is the error that I receive:
Traceback (most recent call last):
File "email-validator.py", line 17, in <module>
server.connect(mxRecord)
File "/usr/local/Cellar/python/3.7.1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/smtplib.py", line 336, in connect
self.sock = self._get_socket(host, port, self.timeout)
File "/usr/local/Cellar/python/3.7.1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/smtplib.py", line 307, in _get_socket
self.source_address)
File "/usr/local/Cellar/python/3.7.1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/socket.py", line 727, in create_connection
raise err
File "/usr/local/Cellar/python/3.7.1/Frameworks/Python.framework/Versions/3.7/lib/python3.7/socket.py", line 716, in create_connection
sock.connect(sa)
socket.timeout: timed out
You need a reachable SMTP server to get this working.
From the smtplib documentation:
smtplib.SMTP(host='', port=0, local_hostname=None, [timeout,] source_address=None)
The source_address parameter takes a 2-tuple (host, port) for the socket to bind to as its source address before connecting. If omitted (or if host or port are '' and/or 0, respectively), the OS default behavior will be used.
You get the timeout because the connection can't be established. Note that many residential ISPs block outbound traffic on port 25, which would explain consistent timeouts on home wifi.
Try to find and use a free SMTP server, or set one up locally with a Docker image.
To keep your script from hanging on the timeout, wrap the connect call in a try/except. Note that in Python 3.7 the exception raised is socket.timeout (it only became an alias of TimeoutError in 3.10):

try:
    server.connect(mxRecord)
except socket.timeout:
    return
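To make that concrete, here is a minimal sketch (the helper name can_reach_smtp and its defaults are illustrative assumptions, not from the original code) that reports a timeout or refusal instead of crashing:

```python
import smtplib
import socket

def can_reach_smtp(host, port=25, timeout=5):
    """Try to complete the SMTP greeting; False on timeout or refusal."""
    try:
        server = smtplib.SMTP(timeout=timeout)
        server.connect(host, port)  # connects and reads the 220 greeting
        server.quit()
        return True
    except (socket.timeout, OSError):
        return False
```

If this returns False for every MX host on port 25, the block is almost certainly in the network path (firewall or ISP), not in the code.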
I'm creating a socket connection using this code
import socket
import struct

def create_broadcast_listener_socket(broadcast_ip, broadcast_port):
    b_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    b_sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    b_sock.bind(('', broadcast_port))
    mreq = struct.pack("4sl", socket.inet_aton(broadcast_ip), socket.INADDR_ANY)
    b_sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return b_sock
I'm not going to claim to know a lot about this, but I mostly understand it... what's getting me is the broadcast_ip in the socket.inet_aton() call. I would expect it to be an IP address that is relevant to me, specifically filtering UDP messages to only the IP address specified. When I put in the IP address of the device I expect the messages to come from, I get an error:
File "./weatherListener.py", line 38, in <module>
sock_list = [create_broadcast_listener_socket(BROADCAST_IP, BROADCAST_PORT)]
File "./weatherListener.py", line 27, in create_broadcast_listener_socket
b_sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
I put in the IP that was listed in the code sample I used, 239.255.255.254 and it works fine. I can change the values quite a bit and it works just fine. It doesn't seem to matter what value I use as long as it's within a certain range. So what is the point, what am I missing with this?
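For what it's worth, the values that work all seem to fall in the IPv4 multicast block (224.0.0.0/4), which the standard library can check directly (a quick illustration, not part of the original listener code):

```python
import ipaddress

# IP_ADD_MEMBERSHIP expects a multicast group address (224.0.0.0/4),
# not the unicast address of the sending device
print(ipaddress.ip_address("239.255.255.254").is_multicast)  # True
print(ipaddress.ip_address("192.168.1.10").is_multicast)     # False
```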
Any advice would be appreciated
Thank you.
I'm using a Python GATT client (Bleak) on a Raspberry Pi to scan, connect, and read/write values from BLE devices in order to collect their status.
The problem comes when it reaches the "write to GATT characteristic" step, which eventually ends in a "Write not permitted" error.
I use the following scanning method:
device = await BleakScanner.find_device_by_address(mac_addr, timeout=5.0)
Once the device is found, I access to it and do the following:
async with BleakClient(address) as client:
    print(f"Connected: {client.is_connected}")

    # Subscribe to the desired GATT characteristic UUID
    await client.start_notify(CHAR_UUID_TO_READ, notification_handler)
    await asyncio.sleep(2.0)
    ...
    # Write one byte to a different GATT characteristic UUID,
    # which will alter the value displayed on CHAR_UUID_TO_READ
    await client.write_gatt_char(CHAR_UUID_TO_WRITE, b'\x01')
    await asyncio.sleep(1.0)
    ...
    # Once the specified end-of-text pattern is found, disconnect
    # from the device
    await client.stop_notify(CHAR_UUID_TO_READ)
By proceeding as described, I get the following error:
[org.bluez.Error.NotPermitted] Write not permitted
Traceback (most recent call last):
File "scan_and_connect.py", line 462, in main
await client.start_notify(CHAR_UUID_TO_READ, notification_handler)
File "/usr/local/lib/python3.7/dist-packages/bleak/backends/bluezdbus/client.py", line 931, in start_notify
assert_reply(reply)
File "/usr/local/lib/python3.7/dist-packages/bleak/backends/bluezdbus/utils.py", line 23, in assert_reply
raise BleakDBusError(reply.error_name, reply.body)
bleak.exc.BleakDBusError: [org.bluez.Error.NotPermitted] Write not permitted
Any help on this topic would be appreciated.
Thanks in advance.
I am getting the below exception while trying to use multiprocessing with Flask-SQLAlchemy.
sqlalchemy.exc.ResourceClosedError: This result object does not return rows. It has been closed automatically.
[12/Aug/2019 18:09:52] "GET /api/resources HTTP/1.1" 500 -
Traceback (most recent call last):
File "/usr/local/lib/python3.7/site-packages/SQLAlchemy-1.3.6-py3.7-linux-x86_64.egg/sqlalchemy/engine/base.py", line 1244, in _execute_context
cursor, statement, parameters, context
File "/usr/local/lib/python3.7/site-packages/SQLAlchemy-1.3.6-py3.7-linux-x86_64.egg/sqlalchemy/engine/default.py", line 552, in do_execute
cursor.execute(statement, parameters)
psycopg2.DatabaseError: error with status PGRES_TUPLES_OK and no message from the libpq
Without multiprocessing the code works perfectly, but when I add multiprocessing as below, I run into this issue.
worker = multiprocessing.Process(target=<target_method_which_has_business_logic_with_DB>, args=(data,), name='PROCESS_ID', daemon=False)
worker.start()
return Response("Request Accepted", status=202)
I see an answer to a similar question on SO (https://stackoverflow.com/a/33331954/8085047), which suggests using engine.dispose(), but in my case I'm using db.session directly, not creating the engine and scope manually.
Please help to resolve the issue. Thanks!
I had the same issue. Following Sam's link helped me solve it.
Before I had (not working):
from multiprocessing import Pool

with Pool() as pool:
    pool.map(f, [arg1, arg2, ...])
This works for me:
from multiprocessing import get_context

with get_context("spawn").Pool() as pool:
    pool.map(f, [arg1, arg2, ...])
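To make the difference concrete, here is a self-contained sketch (the worker function square is just a stand-in): with the "spawn" start method each worker begins in a fresh interpreter, so it does not inherit open connections from the parent the way forked workers do.

```python
from multiprocessing import get_context

def square(x):
    return x * x

if __name__ == "__main__":
    # "spawn" starts each worker from a fresh interpreter state instead
    # of forking the parent (and its open file/DB descriptors)
    with get_context("spawn").Pool(2) as pool:
        print(pool.map(square, [1, 2, 3]))  # [1, 4, 9]
```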
The answer from dibrovsd on GitHub was really useful for me. If you are using a preforking server like uwsgi or gunicorn, this should also help you.
Posting his comment here for reference:
Found it. This happens when uwsgi (or gunicorn) starts and multiple workers are forked from the first process.
If the first process handles a request at startup, it opens a database connection, and that connection is inherited by the forked workers. But the database, of course, has not opened a new connection for each of them, so the workers end up sharing a broken connection.
You have to specify lazy: true / lazy-apps: true (uwsgi) or preload_app = False (gunicorn).
That way the additional workers are not forked with preloaded state; they start up on their own and open their own connections themselves.
Refer to link: https://github.com/psycopg/psycopg2/issues/281#issuecomment-985387977
I'm using Python 3.6 and PRAW 6, trying to build a simple bot with subreddit filters that cross-posts hot submissions into another subreddit. However, I can't seem to set up my subreddit filter properly when initiating the script.
This is pretty annoying because it has worked before. I read that a 403 HTTP response means authentication issues, but that doesn't make sense. I can individually add subreddits into the filter, and I even managed to iteratively remove subreddits from the saved subreddit filter list which I had set up beforehand.
I have a sub_filter.txt file with the list of subreddits I would like to filter out containing strings like so:
tifu
jokes
worldnews
Then,
with open("sub_filter.txt") as q:
    subreddit_filter = q.read().split("\n")
    subreddit_filter = list(filter(None, subreddit_filter))

for i in subreddit_filter:
    filter_list = reddit.subreddit('all').filters.add(i)

for subreddit in reddit.subreddit('all').filters:
    print(subreddit)
This is the error message I get when it reaches the code to iteratively add subreddits into the subreddit filter
for i in subreddit_filter:
    filter_list = reddit.subreddit('all').filters.add(i)
Traceback (most recent call last):
File "C:\Users\Qixuan\Desktop\Programming programmes\Reddit\weweet-bot\weweet-code.py", line 23, in <module>
reddit.subreddit('all').filters.add(i)
File "C:\Users\Qixuan\AppData\Local\Programs\Python\Python36\lib\site-packages\praw\models\reddit\subreddit.py", line 974, in add
"PUT", url, data={"model": dumps({"name": str(subreddit)})}
File "C:\Users\Qixuan\AppData\Local\Programs\Python\Python36\lib\site-packages\praw\reddit.py", line 577, in request
method, path, data=data, files=files, params=params
File "C:\Users\Qixuan\AppData\Local\Programs\Python\Python36\lib\site-packages\prawcore\sessions.py", line 185, in request
params=params, url=url)
File "C:\Users\Qixuan\AppData\Local\Programs\Python\Python36\lib\site-packages\prawcore\sessions.py", line 130, in _request_with_retries
raise self.STATUS_EXCEPTIONS[response.status_code](response)
prawcore.exceptions.Forbidden: received 403 HTTP response
Any help is greatly appreciated! I'm also not very proficient at coding, so please be forgiving!
Welcome to StackOverflow!
I read that 403 HTTP response was authentication issues
403 does indeed indicate an authentication issue. Try adding the following line immediately after reddit is defined:
print(reddit.user.me())
If you are properly authenticated, this will print your username. Otherwise, you need to fix your credentials (username, password, client ID, client secret, user agent).
I have a subscription to a topic, using filters, in Azure Service Bus (developed with Python 3.x), and when I wait for information sent to that topic (information that passes the filter), I cannot receive it.
I need to create a daemon that is always listening and that, when it receives that information, sends it to an internal service of my application, so the receiver runs in a thread inside a while True loop.
The code I use to receive the messages is as follows:
while True:
    msg = bus_service.receive_subscription_message(topic_name, subscription_name, peek_lock=True)
    print('Message received -->', msg.body)
    data = msg.body
    send_MyApp(data.decode("utf-8"))
    msg.delete()
What I get when I run it is the following output:
Message --> None
Exception in thread Thread-1:
Traceback (most recent call last):
File "..\AppData\Local\Programs\Python\Python36-32\lib\threading.py", line 916, in _bootstrap_inner
self.run()
File "..\AppData\Local\Programs\Python\Python36-32\lib\threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "../Python/ServiceBusSuscription/receive_subscription.py", line 19, in receive_subscription
send_MyApp(data.decode("utf-8"))
AttributeError: 'NoneType' object has no attribute 'decode'
If I run the receiver outside the thread, this is the error it shows (again, when the timeout expires; a timeout I should remove, since a daemon that waits forever should not time out). Basically, it is the same error:
Traceback (most recent call last):
File "../Python/ServiceBusSuscription/receive_subscription.py", line 76, in <module>
main()
File "../Python/ServiceBusSuscription/receive_subscription.py", line 72, in main
demo(bus_service)
File "../Python/ServiceBusSuscription//receive_subscription.py", line 25, in demo
print(msg.body.decode("utf-8"))
AttributeError: 'NoneType' object has no attribute 'decode'
I do not receive the information I'm waiting for, and a Service Bus timeout fires (which I have not programmed).
Can anybody help me? Microsoft's documentation does not help much, really.
Thanks in advance
UPDATE
I think the problem comes from Azure Service Bus subscriptions and filters. Actually, I have 23 filters, and I think Azure Service Bus only works with 1 subscription :( But I'm not sure about this point.
I managed to reproduce your issue, and discovered that it happens when there is no message in your topic.
So you need to check whether msg.body is None before decoding the bytes of msg.body, as below:
data = msg.body
if data is not None:
    # or: if isinstance(data, bytes):
    send_MyApp(data.decode("utf-8"))
    msg.delete()
else:
    ...
Hope it helps.
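The guard itself can be sketched independently of Service Bus (decode_body is a hypothetical helper; send_MyApp and the bus client are left out):

```python
def decode_body(body):
    """Decode a received message body, or return None when nothing arrived."""
    if body is None:
        return None
    return body.decode("utf-8")

print(decode_body(b"hello"))  # hello
print(decode_body(None))      # None
```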