Sending request body to fastapi server while debugging - python-3.x

I have a simple FastAPI server that accepts a request body consisting of a single string. It is supposed to pass that string to another module, get the result, and send it back to the client.
Things on the client side seem to be working fine, but for some reason the call from the server to the other module isn't working. I suspect that I am not actually sending to the module what I think I am, and that I have to modify the call in some way. I want to stop it in the debugger so I can see exactly what the request and response objects look like.
I know how to set it up to debug the server, but not how to then send it a request body while in the debugger.
The server looks like this, with a main block added so I can debug it directly:
import uvicorn
from fastapi import FastAPI
from pydantic import BaseModel
from annotate import AnnotationModule
from fastapi.testclient import TestClient

app = FastAPI()

class Sentence(BaseModel):
    text: str

print('Spinning up resources.')
am = AnnotationModule(wsd_only=True)
print('Everything loaded. Ready.')

@app.post("/annotate/")
async def read_item(sentence: Sentence):
    return am.annotate(sentence.text)

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8000)
I can set a breakpoint and run this, but how do I send it a request body so I can see it working?
I tried using this client code, but it didn't work:
import requests
# api-endpoint
URL = "http://0.0.0.0:8000/annotate/"
PARAMS = {"text": "This is a test sentence."}
# sending a POST request and saving the response as a response object
annotation_dict = requests.post(url=URL, json=PARAMS)
Running this while the server is sitting in the debugger, I get:
requests.exceptions.ConnectionError: HTTPConnectionPool(host='0.0.0.0', port=8000): Max retries exceeded with url: /annotate (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fd926ed5f90>: Failed to establish a new connection: [Errno 61] Connection refused'))
So, how can I debug it while observing it taking and processing a request? If it matters, I am doing this in vscode.

Related

Slack API bot not recognizing bot event subscriptions

I have a slack bot that, for some reason, stopped recognizing certain bot events. I am using the slackeventsapi to import the SlackEventAdapter, and using it to recognize certain bot events, like reaction_added and reaction_removed. Recently that stopped working altogether on my bot, I reset my bot and it still doesn't work. I am unsure as to why.
I have already checked my slack api bot event subscriptions, and I am subscribed to both those events, so I am unsure as to what exactly is causing this to happen. The bot can still post messages to the chat, it just doesn't recognize certain bot events.
Below is an example of the code I am trying to run. The /test route works, but the reaction_added handler never gets invoked and never prints the payload.
import os
import slack_sdk
from flask import Flask, request, Response
from dotenv import load_dotenv
from pathlib import Path
from slackeventsapi import SlackEventAdapter

env_path = Path('.') / '.env'
load_dotenv(dotenv_path=env_path)

app = Flask(__name__)
slack_event_adapter = SlackEventAdapter(os.environ['SIGNING_SECRET'], '/slack/events', app)
client = slack_sdk.WebClient(token=os.environ['SLACK_TOKEN'])

@app.route('/test', methods=['POST'])
def test():
    print("recognized a test command was sent!")
    return Response(), 200

@slack_event_adapter.on("reaction_added")
def reaction(payLoad) -> None:
    print(payLoad)

if __name__ == "__main__":
    app.run(debug=True, port=8088)
I am exposing everything through an ngrok tunnel. To clarify: when I use the /test command, the bot gets an HTTP POST request and returns 200. When I add a reaction to something, I do not get an HTTP request at all, and I am unsure why.

Is it possible to capture websocket traffic using selenium and python?

We are using Selenium with Python, and as part of our automation we need to capture the messages that a sample website sends and receives after the page has loaded completely.
I have checked here, where it is stated that what we want is achievable using BrowserMobProxy, but after testing it, the websocket connection did not work on the website, and the certificate errors were also cumbersome.
Another post here states that this can be done using Chrome's loggingPrefs, but it seemed we only get the logs up to the point the website loads, not the data after that.
Is it possible to capture websocket traffic using only selenium?
It turned out that it can be done using pyppeteer; in the following code, all the live websocket traffic of a sample website is captured:
import asyncio
from pyppeteer import launch

async def main():
    browser = await launch(
        headless=True,
        args=['--no-sandbox'],
        autoClose=False
    )
    page = await browser.newPage()
    await page.goto('https://www.tradingview.com/symbols/BTCUSD/')
    cdp = await page.target.createCDPSession()
    await cdp.send('Network.enable')
    await cdp.send('Page.enable')

    def printResponse(response):
        print(response)

    cdp.on('Network.webSocketFrameReceived', printResponse)  # called when a websocket frame is received
    cdp.on('Network.webSocketFrameSent', printResponse)      # called when a websocket frame is sent
    await asyncio.sleep(100)

asyncio.get_event_loop().run_until_complete(main())

Flask post request from a route

Is it possible to do a POST request from one route in Flask to another route? A user sends in data, I look up some data, then submit it to another route. I am getting an Internal Server Error; the log just has failed (111: Connection refused) while connecting to upstream. I know the route works: if I remove the requests line, I see 'found you'.
import requests

@app.route('/route/<string:uname>', endpoint='uname')
def uname(uname):
    d = {}
    d['fname'] = 'data'
    d['lname'] = 'data'
    requests.post(url='https://site.com/test/', data=d)
    return 'found you'

@app.route('/test', endpoint='test', methods=['POST', 'GET'])
def test():
    return "submitted to test"
Update: I added verify=False to the requests.post line, and now it just hangs and ties up the service. I have to restart the service for the site to start working.
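For what it's worth, the hang after verify=False is consistent with a self-request deadlock: a single-threaded development server cannot serve /test while it is still busy handling /route. One hedged way around it is to skip HTTP entirely and call the second route in-process through Flask's test client. A minimal sketch; the route names mirror the question, the payload is illustrative:

```python
# Sketch: posting from one Flask route to another in-process via the
# test client, avoiding a blocking HTTP request back to the same server.
from flask import Flask

app = Flask(__name__)

@app.route('/route/<string:uname>')
def uname(uname):
    d = {'fname': 'data', 'lname': 'data'}
    # In-process WSGI call: no socket involved, so the single-threaded
    # dev server cannot deadlock waiting on itself.
    with app.test_client() as client:
        client.post('/test', data=d)
    return 'found you'

@app.route('/test', methods=['POST', 'GET'])
def test():
    return "submitted to test"
```

If a real HTTP round-trip is required, running the dev server with app.run(threaded=True) also avoids the single-worker deadlock.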

Can't get SSL to work for a Secure Websocket connection

I'm new to Python but have to build a websocket API for work.
The website of the websockets module says that this code should work for secure websocket connections (https://websockets.readthedocs.io/en/stable/intro.html).
However, I cannot get the provided code to work.
import websockets
import asyncio
import pathlib
import ssl
ssl_context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ssl_context.load_verify_locations(pathlib.Path(__file__).with_name('localhost.pem'))
I get the error:
Traceback (most recent call last):
  File "/Applications/Python 3.7/apii.py", line 7, in <module>
    ssl_context.load_verify_locations(pathlib.Path(__file__).with_name('localhost.pem'))
FileNotFoundError: [Errno 2] No such file or directory
Could you help me out?
PS. I do not get the idea of this ssl_context code at all. Could somebody explain the logic behind it, please?
I test websockets from Docker containers, where I have NAT IPs (can't use Let's Encrypt) and no wildcard domain certificate. If I test directly in Firefox it works fine without any certificate, so disabling SSL verification for testing purposes inside a firewall should be acceptable.
Extract:
import asyncio
import websockets
import ssl
:
ssl_context = ssl.create_default_context()
ssl_context.check_hostname = False
ssl_context.verify_mode = ssl.CERT_NONE

async def logIn(uri, myjson):
    async with websockets.connect(uri, ssl=ssl_context) as websocket:
        await websocket.send(myjson)
        resp = await websocket.recv()
        print(resp)

myurl = f"wss://{args.server}:{args.port}"
print("connection url:{}".format(myurl))

# logdef is the json string I send in
asyncio.get_event_loop().run_until_complete(
    logIn(myurl, logdef)
)
Late answer but maybe still interesting for others.
The documentation (1) for this specific example says:
This client needs a context because the server uses a self-signed certificate. A client connecting to a secure WebSocket server with a valid certificate (i.e. signed by a CA that your Python installation trusts) can simply pass ssl=True to connect() instead of building a context.
So if the server you want to access has a valid certificate, just do the following:
uri = "wss://server_you_want_to_connect_to_endpoint"
async with websockets.connect(uri, ssl=True) as websocket:
    # and so on ...

Trouble passing HTTPS communication through mitmproxy

I'm trying to debug an HTTPS connection using mitmproxy and Python 3. The idea is the following: the proxy runs locally, and I tell Python to use it for an HTTPS connection; Python must accept the self-signed certificate created by mitmproxy for this to work, and if all goes well, the mitmproxy console will show me the decoded request/response pair.
I am able to do what I want with Python 3 and requests, but I have to do it using the standard urllib, alas, and I'm failing. Here is the code:
#!/usr/bin/env python3
import urllib.request
import ssl

proxy = urllib.request.ProxyHandler({'https': 'localhost:8080'})
opener = urllib.request.build_opener(proxy)
urllib.request.install_opener(opener)

ssl_ctx = ssl.create_default_context()
ssl_ctx.check_hostname = False
ssl_ctx.verify_mode = ssl.CERT_NONE
# ssl_ctx = ssl._create_unverified_context()

req = urllib.request.Request('https://github.com')
with urllib.request.urlopen(req) as res:
# with urllib.request.urlopen(req, context=ssl_ctx) as res:
    print(res.read().decode('utf8'))
When the above code is executed, I get a
ssl.SSLError: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:645)
error, which means the proxy is used, but its certificate doesn't validate. This is exactly what I expect to happen at this point.
When I use the commented-out line to pass the ssl_ctx (created either way), I get the contents of the page I request, but the proxy console never shows the decoded request information; it stays blank. It is as though using the ssl_ctx circumvents the proxy altogether.
Could someone please help me out and tell me what I'm doing wrong?
EDIT: just to make sure, I changed the proxy port to 8081, and now the code not using the ssl_ctx variable fails with 'connection refused' (as expected), while the code using the ssl_ctx works fine. This confirms my assumption that the proxy is not used at all.
Thanks for asking this and posting the code you tried. I'm not sure why the documentation claims that fetching HTTPS through a proxy is not supported, because it does work.
Instead of explicitly passing the SSL context, I created and installed an HTTPSHandler in addition to the ProxyHandler. This worked for me (Python 3.5 + mitmproxy):
import urllib.request
import ssl

ssl_ctx = ssl.create_default_context()
ssl_ctx.check_hostname = False
ssl_ctx.verify_mode = ssl.CERT_NONE

ssl_handler = urllib.request.HTTPSHandler(context=ssl_ctx)
proxy_handler = urllib.request.ProxyHandler({'https': 'localhost:8080'})
opener = urllib.request.build_opener(ssl_handler, proxy_handler)
urllib.request.install_opener(opener)

if __name__ == '__main__':
    req = urllib.request.Request('https://...')
    with urllib.request.urlopen(req) as res:
        print(res.read().decode('utf8'))
Once this opener is installed, it's used as the default for all requests using urllib, even in third-party libraries -- which is what I needed.
As for why your commented-out line didn't work: when you pass context to urlopen, urllib builds a fresh opener containing only an HTTPSHandler for that call, bypassing the installed opener and, with it, your ProxyHandler.
