I'm trying to test that a SendGrid method was called without sending an email. When I run my test, the method is not patched and instead runs the original method which sends an email. I'm not sure why my patch is not working. This question is similar to How to mock a SendGrid method in Python but using a different version of SendGrid.
# api/Login/utils_test.py
from .utils import send_email
from unittest.mock import patch

@patch('api.Login.utils.sg.client.mail.send.post')
def test_send_email(mock_mail_send):
    send_email(email, subject, html)
    assert mock_mail_send.called
# api/Login/utils.py
from api import sg

def send_email(email, subject, html):
    msg = create_email(email, subject, html)
    request_body = msg.get()
    response = sg.client.mail.send.post(request_body=request_body)
# api/__init__.py
from server import sg

# server.py
import sendgrid
import os

sg = sendgrid.SendGridAPIClient(apikey=os.environ.get('SENDGRID_API_KEY'))
Currently when I run pytest Login/utils_test.py from inside the api directory, I get an AssertionError:
assert False
+ where False = <MagicMock name='post' id='4370000808'>.called
I expect the test to pass with no output.
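For what it's worth, here is a toy illustration of the suspected failure mode (this is a sketch, not SendGrid's actual code): if the client builds the `mail.send.post` chain dynamically, every attribute access resolves to a fresh object, so a patch applied to one resolved chain never intercepts the object the code under test actually uses.

```python
class Chain:
    """Toy stand-in for a dynamically built API client: every attribute
    access returns a brand-new object, so patching an attribute on one
    resolved chain never affects the next lookup. (Illustrative only.)"""
    def __getattr__(self, name):
        return Chain()

root = Chain()
first = root.mail.send
second = root.mail.send
# Two lookups of the "same" chain yield two different objects.
assert first is not second
```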
Found a workaround in the sendgrid-python repo issues: https://github.com/sendgrid/sendgrid-python/issues/293
I'm going to wrap the call, because patch doesn't seem to work with the SendGrid Web API v3 client, and it doesn't look like they are going to support it.
Updating to the following:
# api/utils.py
def send_email(email, subject, html):
    msg = create_email(email, subject, html)
    request_body = msg.get()
    response = _send_email(request_body)
    print(response.status_code)
    print(response.body)
    print(response.headers)

def _send_email(request_body):
    """Wrap the SendGrid mail-sending call so it can be patched while
    testing. https://github.com/sendgrid/sendgrid-python/issues/293"""
    response = sg.client.mail.send.post(request_body=request_body)
    return response

# api/utils_test.py
@patch('api.Login.utils._send_email')
def test_send_email(mock_mail_send):
    send_email(email, subject, html)
    assert mock_mail_send.called
Not on the Python side, but on the SendGrid side, have you tried using sandbox mode?
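Sandbox mode is controlled by the `mail_settings.sandbox_mode.enable` flag in the v3 Mail Send request body; SendGrid then validates the message without delivering it. A small hypothetical helper could flip it on before posting:

```python
def enable_sandbox(request_body):
    """Return a copy of a SendGrid v3 /mail/send request body with sandbox
    mode enabled, so the message is validated but never delivered."""
    body = dict(request_body)
    settings = dict(body.get("mail_settings", {}))
    settings["sandbox_mode"] = {"enable": True}
    body["mail_settings"] = settings
    return body

# e.g. sg.client.mail.send.post(request_body=enable_sandbox(request_body))
```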
Another option is to use Prism in conjunction with our Open API definition. This will create a local mocked version of the SendGrid API so you can test against any of our endpoints.
Then you can run prism run --mock --list --spec https://raw.githubusercontent.com/sendgrid/sendgrid-oai/master/oai_stoplight.json from your command line.
To have Prism auto-start, please see this example.
I'm a very junior developer, tasked with automating the creation, download, and transformation of a query from Stripe Sigma.
I've been able to get the bulk of my job done: I have daily scheduled queries that generate a report for the prior 24 hours, linked to a dummy account purely for those reports, and I've got the transformation and reporting done on the back half of this problem.
The roadblock I've run into, though, is getting this code to pull the CSV that manually clicking the link generates.
import re
import traceback

import requests
from requests.auth import HTTPDigestAuth
from bs4 import BeautifulSoup
from imbox import Imbox  # pip install imbox

mail = Imbox(host, username=username, password=password, ssl=True,
             ssl_context=None, starttls=False)
messages = mail.messages(unread=True)

message_list = []
for (uid, message) in messages:
    body = str(message.body.get('html'))
    message_list.append(body)
mail.logout()

def get_download_link(message):
    soup = BeautifulSoup(message, 'html.parser')
    urls = []
    for link in soup.find_all('a'):
        print(link.get('href'))
        urls.append(link.get('href'))
    return urls[1]

dl_urls = []
for m in message_list:
    dl_urls.append(get_download_link(m))

for url in dl_urls:
    print(url)
    try:
        s = requests.Session()
        s.auth = (username, password)
        response = s.get(url, allow_redirects=True, auth=(username, password))
        if response.status_code == requests.codes.ok:
            print('response headers', response.headers['content-type'])
            response = requests.get(url, allow_redirects=True,
                                    auth=HTTPDigestAuth(username, password))
            print(response.content)
            # open(filename, 'wb').write(response.content)
        else:
            print("invalid status code", response.status_code)
    except Exception:
        print('problem with url', url)
I'm working on this in Jupyter notebooks. I've tried to include just the relevant code detailing how I got into the email, how I extracted the URLs from said email, and which one, upon being clicked, would download the CSV.
All the way until the last step I've had remarkably good luck, but now the URL that I manually click downloads the CSV as expected, while that same URL is treated as the HTML of a Stripe page by python/requests.
I've tried poking around in the headers; the one header that was suggested in another post ('Content-Disposition') wasn't present, and printing the headers that are present takes up a good 20-25 lines.
Any suggestions on headers that could contain the CSV, or other approaches I could take, would be appreciated.
I've included an (intentionally broken) URL to show the rough format of what works for manual download but does not work when kept entirely in Python.
https://59.email.stripe.com/CL0/https:%2F%2Fdashboard.stripe.com%2Fscheduled_query_runs%xxxxxxxxxxxxxxxx%2Fdownload/1/xxxxxxxxxxxx-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx-000000/xxxxxxxxx-xxxxx_xxxxxxxxxxxxxxxx=233
If you're using scheduled queries, you can receive notifications about completion/availability as webhook events and then access the files programmatically using the URL included in the event payload. You can also list/retrieve scheduled queries via the API and check their status before accessing the data at the file link.
There should be no need to parse these links out of an email.
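As a sketch of that flow, a webhook handler might pull the file URL out of the event payload like this (field names are per my understanding of the `sigma.scheduled_query_run.created` event; verify against the events you actually receive):

```python
def extract_result_file_url(event):
    """Pull the downloadable result-file URL out of a Sigma webhook event,
    returning None for other event types or runs without a file yet."""
    if event.get("type") != "sigma.scheduled_query_run.created":
        return None
    run = event.get("data", {}).get("object", {})
    file_info = run.get("file") or {}
    return file_info.get("url")

# Sample payload shape (placeholder IDs), as a webhook endpoint might see it:
sample_event = {
    "type": "sigma.scheduled_query_run.created",
    "data": {"object": {"file": {"url": "https://files.stripe.com/v1/files/file_xxx/contents"}}},
}
print(extract_result_file_url(sample_event))
```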
You can also set up a Stripe webhook for the Stripe Sigma query run, pointing that webhook notification to a Zap webhook trigger. Then:
Catch the hook in Zapier
Filter by the Sigma query name
Use another Zapier webhook custom request to authenticate and grab the data object file URL
Use the utilities formatter by Zapier to transform the results into a CSV file
I am pretty new to Python and I am trying to do the following:
Use a Python script to list the last 10 emails in a specific folder on Exchange. However, I keep getting the following error:
exchangelib.errors.ErrorNonExistentMailbox: No mailbox with such guid
Below is the script I am trying to run:
#!/usr/bin/env python
# coding: utf-8
import logging

from exchangelib import DELEGATE, Account, Credentials, Configuration
from exchangelib.util import PrettyXmlHandler
from exchangelib.protocol import BaseProtocol, NoVerifyHTTPAdapter

logging.basicConfig(level=logging.DEBUG, handlers=[PrettyXmlHandler()])

def connect_exchange():
    exCredentials = Credentials(username='User@Domainname.com', password='**********')
    exConfig = Configuration(server='mail.domainname.com', credentials=exCredentials)
    account = Account(primary_smtp_address='User@Domainname.com', credentials=exCredentials,
                      config=exConfig, autodiscover=False, access_type=DELEGATE)
    BaseProtocol.HTTP_ADAPTER_CLS = NoVerifyHTTPAdapter
    # Print the 10 most recent inbox messages
    for item in account.inbox.all().order_by('-datetime_received')[:10]:
        print(item.subject, item.body, item.attachments)

connect_exchange()
I can see that I am able to connect to the mail server; however, the error above gets thrown when the script reaches the for loop.
Has anyone encountered such an error before? If so, are there any workarounds?
Thanks
This is the server telling you that it doesn't know a mailbox with that name. Maybe you're connecting to the wrong server, or you misspelled the email address, or the email address is an alias for the real mailbox?
I tested your snippet on my machine, and with minimal modifications it's working for me just fine. Please check your credentials for any typos.
BaseProtocol.HTTP_ADAPTER_CLS = NoVerifyHTTPAdapter
logging.basicConfig(level=logging.DEBUG, handlers=[PrettyXmlHandler()])

def connect_exchange():
    exCredentials = Credentials(username='admin@testexchange.local', password='Password')
    exConfig = Configuration(server='testexchange.local', credentials=exCredentials)
    account = Account(primary_smtp_address='user@testexchange.local', credentials=exCredentials,
                      config=exConfig, autodiscover=False, access_type=DELEGATE)
    # Print the 10 most recent inbox messages
    for item in account.inbox.all().order_by('-datetime_received')[:10]:
        print(item.subject, item.body, item.attachments)

connect_exchange()
I'm writing a Telegram bot that must send an image to a user, get a reply and store it as a string.
I managed to write the script to start the bot and send the image (very basic, I know), but I can't figure out how to get the answer. Here is a MWE of my script:
import telegram

token = '1234567890:ABCdEffGH-IJKlm1n23oPqrst_UVzAbcdE4'
bot = telegram.Bot(token=token)
user_id = 567890123
image_path = './image.png'

with open(image_path, 'rb') as my_image:
    bot.send_photo(chat_id=user_id, photo=my_image, caption='What is this?')

# Somehow store the answer
# Do some other stuff
How can I get and store the user's answer?
I can change the script as needed, but to interact with Telegram's API I must use python-telegram-bot only.
You can make a flag called waiting_for_response and set it to True after sending the image.
After that, you should add a handler to the bot to receive the user's reply. You can do it this way:

from telegram.ext import Updater, RegexHandler

updater = Updater(token=TOKEN, use_context=True)
dispatcher = updater.dispatcher

handler = RegexHandler(r'.+', function_name)
dispatcher.add_handler(handler)
updater.start_polling()

Note: You should replace TOKEN with your own token.
By adding these lines, when the user sends a message the bot calls the function_name function, which you can define like this:

def function_name(update, context):
    global waiting_for_response
    if waiting_for_response:
        pass

You can do whatever you want with that message inside this function, using update and context.
I have an Angular 8 web app. It needs to send some data for analysis to a Python Flask app. The Flask app sends the data to a 3rd-party service and gets the response back through a webhook.
My need is to provide a clean interface to the client, so that there is no need to provide a webhook from the client.
Hence I am trying to initiate a request from the Flask app, wait until the data arrives on the webhook, and then return.
Here is the code.
# In an autoscaled micro-service this will not work. Right now the scaling is
# manual and set to 1 instance, so keeping this sliding-window list in RAM is okay.
reqSlidingWindow = []

@app.route('/handlerequest', methods=['POST'])
def Process_client_request_and_respond(param1, param2, param3):
    # payload code removed
    # CORS header part removed
    querystring = {APIKey, 'https://mysvc.mycloud.com/postthirdpartyresponsehook'}
    response = requests.post(thirdpartyurl, json=payload, headers=headers, params=querystring)
    if response.status_code == SUCCESS:
        respdata = response.json()
        requestId = respdata["request_id"]
        requestobject = {}
        requestobject['requestId'] = requestId
        reqSlidingWindow.append(requestobject)
    # Now wait for the response to arrive through the webhook.
    # Keep checking the list until reqRecord["AllAnalysisDoneFlag"] is set to True.
    # If set, read reqRecord["AnalysisResult"] into jsondata.
    jsondata = None
    while jsondata is None:
        time.sleep(2)
        for reqRecord in reqSlidingWindow:
            if reqRecord["requestId"] == da_analysis_req_Id:
                print("Found matching req id in req list. Checking if analysis is done.")
                print(reqRecord)
                if reqRecord["AllAnalysisDoneFlag"] == True:
                    jsondata = reqRecord["AnalysisResult"]
    return jsonify({"AnalysisResult": "Success", "AnalysisData": jsondata})

@app.route('/webhookforresponse', methods=['POST'])
def process_thirdparty_svc_response(reqIdinResp):
    print(request.data)
    print("response received at")
    data = request.data.decode('utf-8')
    jsondata = json.loads(data)
    for reqRecord in reqSlidingWindow:
        if reqRecord["requestId"] == reqIdinResp:
            reqRecord["AllAnalysisDoneFlag"] = True
            reqRecord["AnalysisResult"] = jsondata
    return
I'm trying to maintain a sliding window of requests in a list.
Observations so far:
First, this does not work. The 'webhookforresponse' function does not seem to run until my request function returns; i.e., my while loop appears to block everything even though I have a time.sleep() in it. My assumption was that the Flask framework would ensure the callback is called, since the sleep in the other route gives it adequate time and Flask is internally multithreaded.
I tried running Python threads for the sliding-window data structure and also used RLocks. This does not alter the behavior. I have not tried Flask-specific threading.
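One way to avoid the sleep-polling altogether is to block on a threading.Event that the webhook handler sets. A minimal sketch with hypothetical names (it assumes the server actually handles the two routes on separate threads, e.g. via app.run(threaded=True)):

```python
import threading

class PendingRequest:
    """One in-flight request: the client-facing route waits on it,
    the webhook route completes it."""
    def __init__(self, request_id):
        self.request_id = request_id
        self._done = threading.Event()
        self.result = None

    def complete(self, result):
        # Called from the webhook route when the third party responds.
        self.result = result
        self._done.set()

    def wait_for_result(self, timeout=60):
        # Called from the client-facing route; returns None on timeout.
        if self._done.wait(timeout):
            return self.result
        return None
```

The client-facing route would register a PendingRequest keyed by request_id, call wait_for_result(), and jsonify the outcome; the webhook route looks the record up by id and calls complete() with the parsed payload.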
Questions:
What is the right architecture for the above need with Flask? I need a clean REST interface, without a callback, for the Angular client. Everything else can change.
If the above code is to be used, what changes should I make? Is threading required at all?
Though I could use a database and then pick the results from there, that still requires polling the sliding window.
Should I use Flask-specific multithreading? Is there any specific example with a skeletal design and threading-library choices?
Please suggest the right way to achieve an abstract REST API for the Angular client which hides the back-end callbacks and other complexities.
Right, hello, so I'm trying to implement Opticard (loyalty card services) in my webapp using trio and asks (https://asks.readthedocs.io/).
So I'd like to send a request to their inquiry api:
Here goes using requests:
import requests
r = requests.post("https://merchant.opticard.com/public/do_inquiry.asp", data={'company_id':'Dt', 'FirstCardNum1':'foo', 'g-recaptcha-response':'foo','submit1':'Submit'})
This will return "Invalid ReCaptcha", and this is normal and what I want.
Same thing using aiohttp:
import asyncio
import aiohttp

async def fetch(session, url):
    async with session.post(url, data={'company_id': 'Dt', 'FirstCardNum1': 'foo',
                                       'g-recaptcha-response': 'foo', 'submit1': 'Submit'}) as response:
        return await response.text()

async def main():
    async with aiohttp.ClientSession() as session:
        html = await fetch(session, 'https://merchant.opticard.com/public/do_inquiry.asp')
        print(html)

loop = asyncio.get_event_loop()
loop.run_until_complete(main())
Now this also returns "Invalid ReCaptcha", so that's all nice and good.
However now, using trio/asks:
import asks
import trio

async def example():
    r = await asks.post('https://merchant.opticard.com/public/do_inquiry.asp',
                        data={'company_id': 'Dt', 'FirstCardNum1': 'foo',
                              'g-recaptcha-response': 'foo', 'submit1': 'Submit'})
    print(r.text)

trio.run(example)
This returns a completely different response: 'Your session has expired to protect your account. Please login again.' This error/message normally appears when inputting an invalid URL such as 'https://merchant.opticard.com/do_inquiry.asp' instead of 'https://merchant.opticard.com/public/do_inquiry.asp'.
I have no idea where this error is coming from. I tried setting headers, cookies, and encoding; nothing seems to make it work. I tried replicating the issue, but the only way I managed to reproduce the result with aiohttp and requests was by setting an incorrect URL such as 'https://merchant.opticard.com/do_inquiry.asp'.
This must be an issue with asks, maybe due to encoding or formatting, but I've been using asks for over a year and never had a simple POST request with data return differently on asks compared to everywhere else. I'm baffled: if this were a formatting error on asks' part, how come it has never happened before in over a year of use?
This is a bug in how asks handles redirection when a non-absolute Location is received.
The server returns a 302 redirect with Location: inquiry.asp?... while asks expects it to be a full URL. You may want to file a bug report against asks.
How did I find this? A good way is to use a proxy (e.g. mitmproxy) to inspect the traffic. However, asks doesn't support proxies, so I turned to Wireshark instead and used a program to extract the TLS keys so Wireshark could decrypt the traffic.
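For reference, the resolution a spec-compliant client should apply to such a relative Location header is exactly what the stdlib's urljoin implements (the `sid=123` query string below is a made-up example value):

```python
from urllib.parse import urljoin

base = "https://merchant.opticard.com/public/do_inquiry.asp"
location = "inquiry.asp?sid=123"  # hypothetical relative Location header

# Resolve the redirect target against the URL of the original request,
# per the RFC 3986 relative-reference rules.
print(urljoin(base, location))
# https://merchant.opticard.com/public/inquiry.asp?sid=123
```

A client that instead treats "inquiry.asp?..." as an absolute URL, as asks appears to, ends up requesting the wrong resource, which matches the session-expired page seen above.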