exchangelib.errors.ErrorNonExistentMailbox: when trying to access exchange with Python - python-3.x

I am pretty new to Python and I am trying to do the following:
Use a Python script to list the last 10 emails in a specific folder on Exchange. However, I keep getting the following error:
exchangelib.errors.ErrorNonExistentMailbox: No mailbox with such guid
Below is the script I am trying to run:
#!/usr/bin/env python
# coding: utf-8
from datetime import timedelta
from exchangelib import DELEGATE, IMPERSONATION, Account, Credentials, EWSDateTime, EWSTimeZone, Configuration, CalendarItem, Message, Mailbox, Attendee, Q, ExtendedProperty, FileAttachment, ItemAttachment, HTMLBody, Build, Version
from exchangelib import Configuration, GSSAPI, SSPI
from exchangelib.util import PrettyXmlHandler
from exchangelib.protocol import BaseProtocol, NoVerifyHTTPAdapter
import logging
import requests

logging.basicConfig(level=logging.DEBUG, handlers=[PrettyXmlHandler()])

def connect_exchange():
    exCredentials = Credentials(username='User@Domainname.com', password='**********')
    exConfig = Configuration(server='mail.domainname.com', credentials=exCredentials)
    account = Account(primary_smtp_address='User@Domainname.com', credentials=exCredentials, config=exConfig, autodiscover=False, access_type=DELEGATE)
    BaseProtocol.HTTP_ADAPTER_CLS = NoVerifyHTTPAdapter
    # Print the 10 most recent inbox messages
    for item in account.inbox.all().order_by('-datetime_received')[:10]:
        print(item.subject, item.body, item.attachments)

connect_exchange()
I can see that I am able to connect to the mail server; the error above appears to be thrown when the script reaches the for loop.
Has anyone encountered such an error before? If so, are there any workarounds?
Thanks

This is the server telling you that it doesn't know a mailbox with that name. Maybe you're connecting to the wrong server, or you misspelled the email address, or the email address is an alias for the real mailbox?
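If the address turns out to be an alias, one way to let Exchange resolve the real mailbox is autodiscover. A minimal sketch, assuming the credentials themselves are correct:
from exchangelib import DELEGATE, Account, Credentials

# Sketch: with autodiscover=True, exchangelib asks the server to locate
# the mailbox itself instead of trusting the server name you typed
credentials = Credentials(username='User@Domainname.com', password='**********')
account = Account(primary_smtp_address='User@Domainname.com',
                  credentials=credentials, autodiscover=True,
                  access_type=DELEGATE)
print(account.primary_smtp_address)  # the address the server actually resolved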

I tested your snippet on my machine, and with minimal modifications it worked for me just fine. Please check your credentials for any typos.
BaseProtocol.HTTP_ADAPTER_CLS = NoVerifyHTTPAdapter
logging.basicConfig(level=logging.DEBUG, handlers=[PrettyXmlHandler()])

def connect_exchange():
    exCredentials = Credentials(username='admin@testexchange.local', password='Password')
    exConfig = Configuration(server='testexchange.local', credentials=exCredentials)
    account = Account(primary_smtp_address='user@testexchange.local', credentials=exCredentials, config=exConfig, autodiscover=False, access_type=DELEGATE)
    # Print the 10 most recent inbox messages
    for item in account.inbox.all().order_by('-datetime_received')[:10]:
        print(item.subject, item.body, item.attachments)

connect_exchange()

Related

Downloading a CSV that requires authentication from an email link

I'm a very junior developer, tasked with automating the creation, download and transformation of a query from Stripe Sigma.
I've been able to get the bulk of my job done: I have daily scheduled queries that generate a report for the prior 24 hours, linked to a dummy account purely for those reports, and the transformation and reporting on the back half of this problem are done.
The roadblock I've run into, though, is getting this code to pull the CSV that clicking the link manually generates.
import re
from imbox import Imbox  # pip install imbox
import traceback
import requests
from requests.auth import HTTPDigestAuth  # needed for the digest-auth retry below
from bs4 import BeautifulSoup

# host, username and password are defined earlier in the notebook
mail = Imbox(host, username=username, password=password, ssl=True, ssl_context=None, starttls=False)
messages = mail.messages(unread=True)

message_list = []
for (uid, message) in messages:
    body = str(message.body.get('html'))
    message_list.append(body)
mail.logout()

def get_download_link(message):
    print(message[0])
    soup = BeautifulSoup(message, 'html.parser')
    urls = []
    for link in soup.find_all('a'):
        print(link.get('href'))
        urls.append(link.get('href'))
    return urls[1]
    # return urls

dl_urls = []
for m in message_list:
    dl_urls.append(get_download_link(m))

for url in dl_urls:
    print(url)
    try:
        s = requests.Session()
        s.auth = (username, password)
        response = s.get(url, allow_redirects=True, auth=(username, password))
        # print(response.headers)
        if response.status_code == requests.codes.ok:
            print('response headers', response.headers['content-type'])
            response = requests.get(url, allow_redirects=True, auth=HTTPDigestAuth(username, password))
            # print(response.text)
            print(response.content)
            # open(filename, 'wb').write(response.content)
        else:
            print("invalid status code", response.status_code)
    except Exception:
        print('problem with url', url)
I'm working on this in Jupyter notebooks. I've tried to include only the relevant code: how I got into the email, how I extracted URLs from it, and which one, upon being clicked, downloads the CSV.
All the way to the last step I've had remarkably good luck, but now the URL that I click manually downloads the CSV as expected, while that same URL is treated by python/requests as the HTML for a Stripe page.
I've tried poking around in the headers; the one header suggested in another post ('Content-Disposition') wasn't present, and printing the headers that are present takes up a good 20-25 lines.
Any suggestions on either headers that could contain the csv, or other approaches I would take would be appreciated.
I've included a (intentionally broken) URL to show the rough format of what is working for manual download, not working when kept in python entirely.
https://59.email.stripe.com/CL0/https:%2F%2Fdashboard.stripe.com%2Fscheduled_query_runs%xxxxxxxxxxxxxxxx%2Fdownload/1/xxxxxxxxxxxx-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx-000000/xxxxxxxxx-xxxxx_xxxxxxxxxxxxxxxx=233
If you're using scheduled queries, you can receive notification about completion/availability as webhooks and then access the files programmatically using the url included in the event payload. You can also list/retrieve scheduled queries via the API and check their status before accessing the data at the file link.
There should be no need to parse these links out of an email.
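If you'd rather pull the results directly, here is a minimal sketch using the stripe Python library; treat the basic-auth download of run.file.url as an assumption based on Stripe's file docs, and replace the key with your own:
import stripe
import requests

stripe.api_key = 'sk_test_xxx'  # hypothetical placeholder key

# List recent Sigma scheduled query runs and download completed results
runs = stripe.sigma.ScheduledQueryRun.list(limit=10)
for run in runs.data:
    if run.status == 'completed' and run.file:
        # Assumption: the file contents URL accepts the secret key as the
        # HTTP basic-auth username
        resp = requests.get(run.file.url, auth=(stripe.api_key, ''))
        resp.raise_for_status()
        with open(f'{run.id}.csv', 'wb') as f:
            f.write(resp.content)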
You can also set up a Stripe webhook for the Stripe Sigma query run, pointing that webhook notification to a Zap webhook trigger. Then:
- Catch the hook in Zapier
- Filter by the Sigma query name
- Use another Zapier webhook custom request to authenticate and grab the data object file URL
- Use the utilities formatter by Zapier to transform the results into a CSV file

Boto3 client in multiprocessing pool fails with "botocore.exceptions.NoCredentialsError: Unable to locate credentials"

I'm using boto3 to connect to s3, download objects and do some processing. I'm using a multiprocessing pool to do the above.
Following is a synopsis of the code I'm using:
import json
import multiprocessing as mp
import boto3

session = None

def set_global_session():
    global session
    if not session:
        session = boto3.Session(region_name='us-east-1')

def function_to_be_sent_to_mp_pool():
    s3 = session.client('s3', region_name='us-east-1')
    list_of_b_n_o = list_of_buckets_and_objects  # synopsis: built elsewhere
    for bucket, key in list_of_b_n_o:
        content = s3.get_object(Bucket=bucket, Key=key)
        data = json.loads(content['Body'].read().decode('utf-8'))
        write_processed_data_to_a_location()

def main():
    pool = mp.Pool(initializer=set_global_session, processes=40)
    pool.starmap(function_to_be_sent_to_mp_pool, list_of_b_n_o_i)
Now, when processes=40, everything works fine. With processes=64, still fine.
However, when I increase to processes=128, I get the following error:
botocore.exceptions.NoCredentialsError: Unable to locate credentials
Our machine has the required IAM roles for accessing S3. Moreover, the weird thing is that some processes work fine while others throw the credentials error. Why is this happening, and how can I resolve it?
Another weird thing: I'm able to trigger two jobs in two separate terminal tabs (each with a separate SSH login shell to the machine). Each job spawns 64 processes, and that works fine as well, which means 128 processes are running simultaneously. But 80 processes in one login shell fail.
Follow up:
I tried creating separate sessions for separate processes in one approach. In the other, I directly created s3-client using boto3.client. However, both of them throw the same error with 80 processes.
I also created separate clients with the following extra config:
Config(retries=dict(max_attempts=40), max_pool_connections=800)
This allowed me to use 80 processes at once, but anything > 80 fails with the same error.
Post follow up:
Can someone confirm if they've been able to use boto3 in multiprocessing with 128 processes?
This is actually a race condition on fetching the credentials. I'm not sure how fetching credentials works under the hood, but I saw this question on Stack Overflow and this ticket on GitHub.
I was able to resolve this by keeping a random wait time for each of the processes. The following is the updated code which works for me:
import random
import time
import boto3
from botocore.config import Config

client_config = Config(retries=dict(max_attempts=400), max_pool_connections=800)
time.sleep(random.randint(0, num_processes * 10) / 1000)  # random sleep time in milliseconds
s3 = boto3.client('s3', region_name='us-east-1', config=client_config)
I tried keeping the range for the sleep time smaller than num_processes*10, but that failed again with the same issue.
@DenisDmitriev, since you are fetching the credentials and storing them explicitly, I think that resolves the race condition, and hence the issue is resolved.
PS: the values for max_attempts and max_pool_connections aren't based on any particular logic; I plugged in several values until the race condition was figured out.
I suspect that AWS recently reduced throttling limits for metadata requests because I suddenly started running into the same issue. The solution that appears to work is to query credentials once before creating the pool and have the processes in the pool use them explicitly instead of making them query credentials again.
I am using fsspec with s3fs, and here's what my code for this looks like:
import asyncio
import logging
import random
import time

import s3fs

log = logging.getLogger(__name__)

def get_aws_credentials():
    '''
    Retrieve current AWS credentials.
    '''
    fs = s3fs.S3FileSystem()
    # Try getting credentials
    num_attempts = 5
    for attempt in range(num_attempts):
        credentials = asyncio.run(fs.session.get_credentials())
        if credentials is not None:
            if attempt > 0:
                log.info('received credentials on attempt %s', 1 + attempt)
            return asyncio.run(credentials.get_frozen_credentials())
        time.sleep(15 * (random.random() + 0.5))
    raise RuntimeError('failed to request AWS credentials '
                       'after %d attempts' % num_attempts)
def process_parallel(fn_d, max_processes):
    # [...]
    c = get_aws_credentials()
    # Cache credentials
    import fsspec.config
    prev_s3_cfg = fsspec.config.conf.get('s3', {})
    try:
        fsspec.config.conf['s3'] = dict(prev_s3_cfg,
                                        key=c.access_key,
                                        secret=c.secret_key)
        num_processes = min(len(fn_d), max_processes)
        from concurrent.futures import ProcessPoolExecutor
        with ProcessPoolExecutor(max_workers=num_processes) as pool:
            for data in pool.map(process_file, fn_d, chunksize=10):
                yield data
    finally:
        fsspec.config.conf['s3'] = prev_s3_cfg
Raw boto3 code will look essentially the same, except instead of the whole fs.session and asyncio.run() song and dance, you'll work with boto3.Session itself and call its get_credentials() and get_frozen_credentials() methods directly.
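For example, a hedged boto3-only sketch of that approach (the function and variable names here are mine):
import boto3

def get_frozen_aws_credentials():
    # Resolve credentials once, in the parent process, before forking
    session = boto3.Session()
    return session.get_credentials().get_frozen_credentials()

frozen = get_frozen_aws_credentials()

def make_s3_client():
    # Workers build clients from the frozen values instead of querying
    # the metadata endpoint themselves
    return boto3.client(
        's3',
        aws_access_key_id=frozen.access_key,
        aws_secret_access_key=frozen.secret_key,
        aws_session_token=frozen.token,
    )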
I got the same problem in a multiprocessing situation. I suspect there is a client initialization problem when you use multiple processes, so I suggest using a getter function to obtain the S3 client. It works for me.
import boto3

g_s3_cli = None

def get_s3_client(refresh=False):
    global g_s3_cli
    if not g_s3_cli or refresh:
        g_s3_cli = boto3.client('s3')
    return g_s3_cli
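A hedged usage example (process_object and its arguments are hypothetical): call the getter inside each worker, so the client is created after the fork rather than inherited from the parent:
def process_object(bucket, key):
    # Each worker builds (or reuses) its own client lazily
    s3 = get_s3_client()
    body = s3.get_object(Bucket=bucket, Key=key)['Body'].read()
    return len(body)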

How can I get and store a user's message with python-telegram-bot?

I'm writing a Telegram bot that must send an image to a user, get a reply and store it as a string.
I managed to write the script to start the bot and send the image (very basic, I know), but I can't figure out how to get the answer. Here is a MWE of my script:
import telegram

token = '1234567890:ABCdEffGH-IJKlm1n23oPqrst_UVzAbcdE4'
bot = telegram.Bot(token=token)
user_id = 567890123
image_path = './image.png'
with open(image_path, 'rb') as my_image:
    bot.send_photo(chat_id=user_id, photo=my_image, caption='What is this?')
# Somehow store the answer
# Do some other stuff
How can I get and store the user's answer?
I can change the script as needed, but to interact with Telegram's API I must use python-telegram-bot only.
You can make a flag called waiting_for_response and set it to True after sending the image.
After that, you should add a handler to the bot to receive the user's reply. You can do it this way:
from telegram.ext import Updater, RegexHandler

updater = Updater(token=TOKEN, use_context=True)
dispatcher = updater.dispatcher
handler = RegexHandler(r'.+', function_name)
dispatcher.add_handler(handler)
Note: you should replace TOKEN with your own token.
By adding these lines, when the user sends a message the function_name function is called; you can define it like this:
def function_name(update, context):
    global waiting_for_response
    if waiting_for_response:
        pass
You can add whatever you want to do with that message inside of this function using update and context.
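For completeness, here is a hedged end-to-end sketch of the same idea against the v12-style API (the handler name store_reply is mine, and MessageHandler with Filters.text stands in for RegexHandler, which newer releases deprecate):
from telegram.ext import Updater, MessageHandler, Filters

waiting_for_response = True
stored_answer = None

def store_reply(update, context):
    # Keep the first reply that arrives while we are waiting for one
    global waiting_for_response, stored_answer
    if waiting_for_response:
        stored_answer = update.message.text
        waiting_for_response = False

updater = Updater(token=token, use_context=True)
updater.dispatcher.add_handler(MessageHandler(Filters.text, store_reply))
updater.start_polling()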

How to mock a sendgrid web api v.3 method in python

I'm trying to test that a SendGrid method was called without sending an email. When I run my test, the method is not patched and instead runs the original method which sends an email. I'm not sure why my patch is not working. This question is similar to How to mock a SendGrid method in Python but using a different version of SendGrid.
# api/Login/utils_test.py
from .utils import send_email
from unittest.mock import patch

@patch('api.Login.utils.sg.client.mail.send.post')
def test_send_email(mock_mail_send):
    send_email(email, subject, html)
    assert mock_mail_send.called

# api/Login/utils.py
from api import sg

def send_email(email, subject, html):
    msg = create_email(email, subject, html)
    request_body = msg.get()
    response = sg.client.mail.send.post(request_body=request_body)

# api/__init__.py
from server import sg

# server.py
import sendgrid
import os

sg = sendgrid.SendGridAPIClient(apikey=os.environ.get('SENDGRID_API_KEY'))
Currently when I run pytest Login/utils_test.py from inside the api directory, I get an AssertionError:
assert False
+ where False = <MagicMock name='post' id='4370000808'>.called
I expect the test to pass with no output.
Found a workaround from the sendgrid-python repo issues https://github.com/sendgrid/sendgrid-python/issues/293
Going to wrap the call because patch doesn't seem to be working with SendGrid web api v.3 and it doesn't look like they are going to support it.
Updating to the following:
# api/utils.py
def send_email(email, subject, html):
    msg = create_email(email, subject, html)
    request_body = msg.get()
    response = _send_email(request_body)
    print(response.status_code)
    print(response.body)
    print(response.headers)

def _send_email(request_body):
    """Wrap the SendGrid mail sending method in order to patch it while
    testing. https://github.com/sendgrid/sendgrid-python/issues/293"""
    response = sg.client.mail.send.post(request_body=request_body)
    return response
# api/utils_test.py
@patch('api.Login.utils._send_email')
def test_send_email(mock_mail_send):
    send_email(email, subject, html)
    assert mock_mail_send.called
Not on the Python side, but on the SendGrid side, have you tried using sandbox mode?
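For reference, a minimal sketch of enabling it on the raw v3 request body (the mail_settings shape comes from the v3 Mail Send API; adapt it to however you build request_body):
# Sandbox mode makes SendGrid validate the message without delivering it
request_body = msg.get()
request_body['mail_settings'] = {'sandbox_mode': {'enable': True}}
response = sg.client.mail.send.post(request_body=request_body)
print(response.status_code)  # 200 indicates the message validated OK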
Another option is to use Prism in conjunction with our Open API definition. This will create a local mocked version of the SendGrid API so you can test against any of our endpoints.
Then you can run prism run --mock --list --spec https://raw.githubusercontent.com/sendgrid/sendgrid-oai/master/oai_stoplight.json from your command line.
To have Prism auto-start, please see this example.
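Once Prism is listening, a hedged sketch of pointing the client at the mock (this assumes your sendgrid-python version accepts a host override and that Prism serves on its default port 4010):
import os
import sendgrid

# Hypothetical: direct all API calls to the local Prism mock instead of
# the live SendGrid endpoint
sg = sendgrid.SendGridAPIClient(apikey=os.environ.get('SENDGRID_API_KEY'),
                                host='http://localhost:4010')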

Add Django Variables in Email Template

I have a relatively simple objective: send an email containing user information (username, email, etc.) to the Django admins when users register and activate their accounts. I am using django-registration for handling the registration process. I then employ signals to send the emails. This process works just fine, but as soon as I try to inject the user, username, or user's email into an email template, I get in trouble.
Here is my signal_connectors.py:
from django.core.mail import mail_admins
from django.dispatch import receiver
from registration.signals import user_registered
from django.template.loader import render_to_string

@receiver(user_registered, dispatch_uid="common.signal_connectors.user_registered_handler")
def user_registered_handler(sender, **kwargs):
    print('user registration completed...')
    subject = 'registration notify'
    template = 'common/email.html'
    message = render_to_string(template, {'user': user})
    from_email = 'from@example.com'
    # send email to admins, if user has been registered
    mail_admins(subject, message, fail_silently=False, connection=None, html_message=None)
    print('sending email to administrators...')
The template is like so:
{{ user }} has registered at example.com
I have tried variations like {{user}}, {{user.username}}, {{request.user.username}}, etc. Also, in the message variable above in signal_connectors.py, I have tried variations, with corresponding import statements.
I have also tried rendering templates with contexts set, and other variations that I have found on the web, eg:
finding the django-registration username variable
I keep getting something like this for an error:
global name 'user' is not defined
I think I have template context processing set up correctly, because I can use something like {{ user.username }} in another template and it works.
I have also tried correcting my import statements to include necessary objects, with no luck. I have tried most of what I can find on the web on this topic so I'm at a loss...
What am I missing?
I didn't realize this was so easy. Based on this post:
Where should signal handlers live in a django project?
I tried actually adding the user and request as arguments in the signal connector method and this worked:
. . .
@receiver(user_registered, dispatch_uid="common.signal_connectors.user_registered_handler")
def user_registered_handler(sender, user, request, **kwargs):
    . . .
Then, I used this in email template:
{{ user }} has registered at example.com
I probably over-thought this one but I am surprised no one stepped up and was able to help with this...
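Putting the pieces together, a hedged version of the fixed handler (same imports as in the question):
@receiver(user_registered, dispatch_uid="common.signal_connectors.user_registered_handler")
def user_registered_handler(sender, user, request, **kwargs):
    # django-registration passes the new user and the request with the
    # signal, so the template can render {{ user }} directly
    subject = 'registration notify'
    message = render_to_string('common/email.html', {'user': user})
    mail_admins(subject, message, fail_silently=False)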
