Error while running server when accessing web page - python-3.x

We were assigned to write a simple app with three pages plus a main page that links to all the others (content trimmed for this version). Below is fully functional code that accomplishes all of that. Nevertheless, an error randomly arises while the server is running, although it does not affect the server's operation at all.
I looked at the requests sent to the server and noticed that periodically, instead of one request, the server receives two.
import socketserver
from server import ThreadedTcpServer
from request import Request
from response import Response

HOST, PORT = 'localhost', 8000  # values not shown in the post; assumed here

class MyTCPHandler(socketserver.StreamRequestHandler):
    def handle(self):
        print('Connected from: ' + str(self.client_address))
        print('==========Request===========')
        request = Request(self.rfile)
        print(request.request_line + '\n')
        response = Response(self.wfile)
        response.add_header('Content-Type', 'text/html')
        response.add_header('Connection', 'close')
        if request.path == '/':
            ...
        response.send()

ThreadedTcpServer.allow_reuse_address = True
server = ThreadedTcpServer((HOST, PORT), MyTCPHandler)
server.serve_forever()
server.server_close()
In the server log I expect to see only request lines and the connection-info prints, but sometimes, instead of one request, it shows two from different client addresses, followed by two exceptions like:
Connected from: ('127.0.0.1', 34962)
==========Request===========
GET /three HTTP/1.1
Connected from: ('127.0.0.1', 34966)
==========Request===========
----------------------------------------
Exception happened during processing of request from ('127.0.0.1', 34966)
----------------------------------------
Exception happened during processing of request from ('127.0.0.1', 34962)
followed by a huge traceback starting in the socketserver module and ending in my Request class, where it fails to parse the request line.
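For what it's worth, I can hide the traceback with a guard before parsing — a sketch, assuming my Request class is what raises on an empty request line (browsers are known to open extra speculative connections that send no data, which would match the second client address):

class MyTCPHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # self.rfile is a buffered reader; peek() returns pending bytes
        # without consuming them, so Request() can still parse the stream.
        if not self.rfile.peek(1):
            # Connection opened but nothing sent (e.g. a speculative
            # browser connection): close quietly instead of parsing.
            return
        request = Request(self.rfile)
        ...

This doesn't explain the duplicate connections, but it keeps the log clean.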

Related

Random ConnectTimeout using aiohttp.ClientSession / httpx.AsyncClient via nginx into aiohttp.web.Application

I am trying to debug random ConnectTimeouts happening in our infrastructure.
The symptom: Every now and then we receive a ConnectTimeout.
What I have tested:
Change hosts file: We changed the hosts file to route directly to the public IP, to avoid DNS lookups, which can cause similar behaviour. This did not provide any significant benefit.
Change client:
Using aiohttp.ClientSession() - occasionally, instead of a ConnectTimeout, the connection never terminates and remains open forever.
async with aiohttp.ClientSession() as session:
    async with session.request(method, url, json=data, headers=headers, timeout=timeout, raise_for_status=False) as resp:
        if resp.content_type == "application/json":
            data = await resp.json()
        else:
            data = await resp.text()
        if resp.ok:
            return data
        if isinstance(data, dict) and "detail" in data:
            raise RawTextError(data["detail"])
        resp.raise_for_status()
Replacing the client with httpx.AsyncClient, I still get a ConnectTimeout sometimes, but increasing the connect timeout removes the ConnectTimeouts. Sample code:
async with httpx.AsyncClient(http2=True, timeout=30.0) as client:
    result = await client.post(cmdurl, data=json.dumps(wf_event["stages"], cls=JSONEncoder))
    res = result.read()
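For reference, the connect-timeout tuning can be made explicit per phase — a sketch with hypothetical values and a hypothetical post_stages wrapper; httpx.Timeout lets the connect phase be relaxed independently of reads and writes:

import json
import httpx

# Hypothetical split: keep the 5 s default for reads and writes, but give
# the connect phase up to 30 s before a ConnectTimeout is raised.
TIMEOUT = httpx.Timeout(5.0, connect=30.0)

async def post_stages(cmdurl, stages):
    async with httpx.AsyncClient(http2=True, timeout=TIMEOUT) as client:
        result = await client.post(cmdurl, data=json.dumps(stages))
        return result.read()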
Running the following script without explicit timeouts (the default httpx timeout is 5 s), I can reproduce the behaviour from multiple servers and locations:
import httpx
import asyncio
import datetime
import sys, traceback

async def testurl():
    cmdurl = "url"
    headers = {"authorization": "token"}
    while True:
        try:
            t0 = datetime.datetime.now()
            async with httpx.AsyncClient(http2=True) as client:
                # print("Fetching from URL: {}".format(cmdurl))
                result = await client.get(cmdurl, headers=headers)
                res = result.json()
                # print(res)
        except Exception as e:
            print("Start : ", t0)
            print("End   : ", datetime.datetime.now())
            print(e)
            traceback.print_exc(file=sys.stdout)

loop = asyncio.get_event_loop()
loop.run_until_complete(testurl())
Upgrade NGINX: The original Nginx is version 1.18. We set up a separate Nginx on 1.21 with a new URL and could replicate the ConnectTimeouts with the above httpx script on the new endpoint, indicating that the Nginx version is fine and that load on Nginx is probably not the issue.
Regarding aiohttp.web.Application: Here we have 14 load-balanced Docker containers in play. I couldn't find anything suggesting a maximum connection count could be the issue here, and I am not sure whether the upstream aiohttp application is at fault, as I don't see any related entries in the Nginx error logs for the corresponding ConnectTimeouts.
So while using httpx with explicit timeouts almost completely resolved the symptom, I still have no idea why a ConnectTimeout would occur and why it sometimes takes longer than 5 seconds to connect. I haven't been able to reproduce this locally, but our live service does handle 5000 concurrent connections at any given time.
Hope someone can point me in the right direction of where to look.
Thanks for all the help in advance.

How to solve 401 Unauthorized error in Socket.IO Django framework?

I am trying to get Socket.IO to work with my Django server. Here is my setup:
Frontend js:
const socket = io.connect('127.0.0.1:8001');
socket.on('connect', () => {
    console.log('socket id: %s\n', socket.id);
});
Django server:
import eventlet
import eventlet.wsgi
import socketio  # imports added; the original snippet omitted them
from django.core.wsgi import get_wsgi_application

application = get_wsgi_application()

sio = socketio.Server(async_mode='eventlet', cors_allowed_origins='*', logger=True, engineio_logger=True)

@sio.event
def connect(sid, environ, auth):
    print('connect ', sid, auth)

static_files = {
    '/public': './static',
}

application = socketio.WSGIApp(sio, application, static_files=static_files)
eventlet.wsgi.server(eventlet.listen(('', 8001)), application)
Dependencies
Django==2.2.11
django-cors-headers==3.0.0
eventlet==0.30.0
gunicorn==19.7.1
python-socketio==4.6.1
...
When I run the JS, the server returns a 401 Unauthorized error before ever reaching the connect function.
Frontend:
GET http://127.0.0.1:8001/socket.io/?EIO=3&transport=polling&t=NYKlRjO 401 (UNAUTHORIZED)
Django server log:
(11053) accepted ('127.0.0.1', 34906)
127.0.0.1 - - [02/Apr/2021 15:39:31] "GET /socket.io/?EIO=3&transport=polling&t=NYKlTB8 HTTP/1.1" 401 253 0.002482
But the weird thing is that if I comment out the connect event, everything else, including other events, works just fine:
# @sio.event
# def connect(sid, environ, auth):
#     print('connect ', sid, auth)
The Django server is running on the same port, 8001. I don't think there is any authentication check on the connect event or on the socket. Does anyone know why the socket suddenly stops working when I set up the connect event?
It took me hours to figure this out, because the server response code is irrelevant to the actual issue here.
The problem, in my case, is that when the JS client connects to the socket server it sends no auth argument, so the connect function raises an exception (with python-socketio 4.x the server appears to invoke connect() with only sid and environ), causing the connection to fail. Any exception raised from the connect function results in a 401 Unauthorized, even though it may not be an authorization issue at all.
The fix is simple: change the connect definition to:
@sio.event
def connect(sid, environ, auth=''):
    print('connect ', sid, auth)
This will address the issue. Always passing an auth token from the frontend JS is a good idea as well.

Getting timed out during smtplib.SMTP("smtp.gmail.com", 587) in Python

The following code works perfectly on another computer:
def send_email(user, pwd, recipient, subject, body):
    # lb.send_email('AlirezaFedEx@gmail.com', 'P&ENLAMS', 'info@nka-it.com', 'Test', 'This is a test')
    import smtplib

    FROM = 'AlXXX@gmail.com'
    TO = ['info@XXX.com']  # recipient if isinstance(recipient, list) else [recipient]
    SUBJECT = 'Test'  # subject
    TEXT = 'This is a test'  # body

    # Prepare actual message
    message = """From: %s\nTo: %s\nSubject: %s\n\n%s
    """ % (FROM, ", ".join(TO), SUBJECT, TEXT)
    try:
        server = smtplib.SMTP('smtp.gmail.com', 587)
        server.ehlo()
        server.starttls()
        server.login(user, pwd)
        server.sendmail(FROM, TO, message)
        server.close()
        print('successfully sent the mail')
    except Exception as err:
        print("failed to send mail:" + str(err))
When running the following part:
server = smtplib.SMTP('smtp.gmail.com',587)
it starts to freeze and eventually gives the following error:
TimeoutError: [WinError 10060] A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond
I checked my browser's proxy settings and disabled the proxy, but I am still getting the error. Any advice, please?
Are you using a proxy? If so, please disable the proxy and try again. This should work properly if you have an active internet connection.
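If it's not a proxy, it helps to fail fast and to test the implicit-TLS port as well. A sketch with hypothetical values and a hypothetical open_smtp helper (smtplib accepts a timeout argument, and SMTP_SSL speaks TLS from the first byte on port 465):

import smtplib

def open_smtp(host='smtp.gmail.com'):
    # Fail after 10 s instead of hanging until WinError 10060.
    try:
        server = smtplib.SMTP(host, 587, timeout=10)
        server.ehlo()
        server.starttls()
    except OSError:
        # Port 587 blocked? Some networks filter STARTTLS but allow
        # implicit TLS on port 465.
        server = smtplib.SMTP_SSL(host, 465, timeout=10)
        server.ehlo()
    return server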

Received error: tornado.ioloop.TimeoutError: Operation timed out after 5 seconds. What to do?

Hi. I am trying to write a test for a web handler:
import pytest
import tornado
from tornado.testing import AsyncTestCase
from tornado.httpclient import AsyncHTTPClient
from tornado.web import Application, RequestHandler
import urllib.parse

class TestRESTAuthHandler(AsyncTestCase):
    @tornado.testing.gen_test
    def test_http_fetch_login(self):
        data = urllib.parse.urlencode(dict(username='admin', password='123456'))
        client = AsyncHTTPClient(self.io_loop)
        response = yield client.fetch("http://localhost:8080//#/login", method="POST", body=data)
        # Test contents of response
        self.assertIn("Automation web console", response.body)
I receive this error when running the test:
raise TimeoutError('Operation timed out after %s seconds' % timeout)
tornado.ioloop.TimeoutError: Operation timed out after 5 seconds
Set ASYNC_TEST_TIMEOUT environment variable.
Runs the IOLoop until stop is called or timeout has passed.
In the event of a timeout, an exception will be thrown. The default timeout is 5 seconds; it may be overridden with a timeout keyword argument or globally with the ASYNC_TEST_TIMEOUT environment variable. -- from http://www.tornadoweb.org/en/stable/testing.html#tornado.testing.AsyncTestCase.wait
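If you only need a longer deadline for one test, the timeout keyword mentioned above can be passed to the decorator directly:

import tornado.testing

class TestRESTAuthHandler(tornado.testing.AsyncTestCase):
    # Raise the deadline for this test from the default 5 to 15 seconds.
    @tornado.testing.gen_test(timeout=15)
    def test_http_fetch_login(self):
        ...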
You need to use AsyncHTTPTestCase, not just AsyncTestCase. A nice example is in Tornado's self-tests:
https://github.com/tornadoweb/tornado/blob/d7d9c467cda38f4c9352172ba7411edc29a85196/tornado/test/httpclient_test.py#L130-L130
You need to implement get_app to return an application with the RequestHandler you've written. Then, do something like:
class TestRESTAuthHandler(AsyncHTTPTestCase):
    def get_app(self):
        # implement this
        pass

    def test_http_fetch_login(self):
        data = urllib.parse.urlencode(dict(username='admin', password='123456'))
        response = self.fetch("http://localhost:8080//#/login", method="POST", body=data)
        # Test contents of response
        self.assertIn("Automation web console", response.body)
AsyncHTTPTestCase provides convenient features so you don't need to write coroutines with "gen.coroutine" and "yield".
Also, I notice you're fetching a URL with a fragment after "#". Note that in real life, web browsers do not include the fragment when they send the URL to the server, so your server would see the URL only as "//", not "//#/login".
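For completeness, a minimal self-contained version could look like this; LoginHandler and its /login route are hypothetical stand-ins for your real application, and self.fetch() takes a path because the test server picks its own port:

import urllib.parse
from tornado.testing import AsyncHTTPTestCase
from tornado.web import Application, RequestHandler

class LoginHandler(RequestHandler):
    # Hypothetical handler standing in for the real login code.
    def post(self):
        self.write("Automation web console")

class TestRESTAuthHandler(AsyncHTTPTestCase):
    def get_app(self):
        return Application([(r"/login", LoginHandler)])

    def test_http_fetch_login(self):
        data = urllib.parse.urlencode(dict(username='admin', password='123456'))
        # self.fetch() resolves the path against the test server's own port.
        response = self.fetch("/login", method="POST", body=data)
        self.assertIn(b"Automation web console", response.body)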

Jupyter for WebSocket communication

I'm working on an app which needs to have a WebSockets API, and will also integrate Jupyter (formerly IPython) notebooks as a relatively minor feature. Since Jupyter already uses WebSockets for communication, how difficult would it be to integrate it as a general library for serving other WebSocket APIs apart from its own? Or am I better off using another library such as aiohttp? I'm looking for any advice and hints about best practices for this. Thanks!
You can proxy WebSockets from your main application to Jupyter.
It really doesn't matter what technology you use to serve WebSockets; the proxy loop will be very similar (wait for a message, push the message forward). However, it will be web-server dependent, as Python has no standard for WebSockets akin to WSGI.
I did one in the pyramid_notebook project. Running Jupyter in its own process is a must, since, at least at the time the code was written, embedding Jupyter directly into your application was not feasible. I am not sure whether the latest versions have changed this. Jupyter itself was using Tornado.
"""UWSGI websocket proxy."""
from urllib.parse import urlparse, urlunparse
import logging
import time
import uwsgi
from pyramid import httpexceptions
from ws4py import WS_VERSION
from ws4py.client import WebSocketBaseClient
#: HTTP headers we need to proxy to upstream websocket server when the Connect: upgrade is performed
CAPTURE_CONNECT_HEADERS = ["sec-websocket-extensions", "sec-websocket-key", "origin"]
logger = logging.getLogger(__name__)
class ProxyClient(WebSocketBaseClient):
"""Proxy between upstream WebSocket server and downstream UWSGI."""
#property
def handshake_headers(self):
"""
List of headers appropriate for the upgrade
handshake.
"""
headers = [
('Host', self.host),
('Connection', 'Upgrade'),
('Upgrade', 'WebSocket'),
('Sec-WebSocket-Key', self.key.decode('utf-8')),
# Origin is proxyed from the downstream server, don't set it twice
# ('Origin', self.url),
('Sec-WebSocket-Version', str(max(WS_VERSION)))
]
if self.protocols:
headers.append(('Sec-WebSocket-Protocol', ','.join(self.protocols)))
if self.extra_headers:
headers.extend(self.extra_headers)
logger.info("Handshake headers: %s", headers)
return headers
def received_message(self, m):
"""Push upstream messages to downstream."""
# TODO: No support for binary messages
m = str(m)
logger.debug("Incoming upstream WS: %s", m)
uwsgi.websocket_send(m)
logger.debug("Send ok")
def handshake_ok(self):
"""
Called when the upgrade handshake has completed
successfully.
Starts the client's thread.
"""
self.run()
def terminate(self):
super(ProxyClient, self).terminate()
def run(self):
"""Combine async uwsgi message loop with ws4py message loop.
TODO: This could do some serious optimizations and behave asynchronously correct instead of just sleep().
"""
self.sock.setblocking(False)
try:
while not self.terminated:
logger.debug("Doing nothing")
time.sleep(0.050)
logger.debug("Asking for downstream msg")
msg = uwsgi.websocket_recv_nb()
if msg:
logger.debug("Incoming downstream WS: %s", msg)
self.send(msg)
s = self.stream
self.opened()
logger.debug("Asking for upstream msg")
try:
bytes = self.sock.recv(self.reading_buffer_size)
if bytes:
self.process(bytes)
except BlockingIOError:
pass
except Exception as e:
logger.exception(e)
finally:
logger.info("Terminating WS proxy loop")
self.terminate()
def serve_websocket(request, port):
"""Start UWSGI websocket loop and proxy."""
env = request.environ
# Send HTTP response 101 Switch Protocol downstream
uwsgi.websocket_handshake(env['HTTP_SEC_WEBSOCKET_KEY'], env.get('HTTP_ORIGIN', ''))
# Map the websocket URL to the upstream localhost:4000x Notebook instance
parts = urlparse(request.url)
parts = parts._replace(scheme="ws", netloc="localhost:{}".format(port))
url = urlunparse(parts)
# Proxy initial connection headers
headers = [(header, value) for header, value in request.headers.items() if header.lower() in CAPTURE_CONNECT_HEADERS]
logger.info("Connecting to upstream websockets: %s, headers: %s", url, headers)
ws = ProxyClient(url, headers=headers)
ws.connect()
# TODO: Will complain loudly about already send headers - how to abort?
return httpexceptions.HTTPOk()
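Hooking the proxy into the application is then a small view function. A sketch — the route name and the upstream Jupyter port are assumptions:

from pyramid.view import view_config

@view_config(route_name='notebook_websocket')
def notebook_websocket(request):
    # Proxy this request to the Jupyter server assumed to run on port 40001.
    return serve_websocket(request, port=40001)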
