I'm somewhat of a n00b to Python and I'm working on a tiny project. This is my code in src/sock.py
import socket
import config

class Server(socket.socket):
    def __init__(self):
        socket.socket.__init__(self, socket.AF_INET, socket.SOCK_STREAM)

    def start(self):
        self.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        self.bind((config.bind_host, config.bind_port))
        self.listen(5)
        while True:
            pass
and my code in start.py
import src
Socket = src.sock
Socket.Server()
Socket.Server.start
but the Server doesn't seem to be starting. :(
Any help would be much appreciated
Your code:
Socket.Server()
will create a server instance. But since you don't assign that created instance to a variable, you can't use it or reach it (and it will be garbage collected very quickly).
Socket.Server.start
accesses the start method on the Server class (not the created instance, but the class). But again, you don't do anything with it: you don't call it, you don't assign it to anything. So it is in effect a no-op.
You need to assign the created server instance to a variable, and then call the start method on that instance. Like so:
server = Socket.Server()
server.start()
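As an aside, even after that fix, the while True: pass loop in start() will busy-spin without ever accepting a connection. A minimal sketch of an accept loop instead (the echo handling is just a placeholder assumption; config is your module from the question):

def start(self):
    self.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    self.bind((config.bind_host, config.bind_port))
    self.listen(5)
    while True:
        conn, addr = self.accept()  # blocks until a client connects
        data = conn.recv(1024)      # read up to 1024 bytes
        conn.sendall(data)          # echo back as a placeholder
        conn.close()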
Using Python 3.7, I made a CLI utility which prints some results to stdout. Depending on an option, the results should be visualized in a browser (single user, no sessions). Flask seems to be a good choice for this. However, this is not a standard use case described in the docs or in tutorials.
I am looking for a best-practice way to pass the data (e.g. a Python list) to the Flask app so that I can return it from view functions. Basically it would be immutable application data. The following seems to work, but I don't like using globals:
main.py:
import myapp

result = compute_stuff()
if show_in_browser:
    myapp.result = result
    myapp.app.run()
myapp.py:
from flask import Flask
from typing import List

app = Flask(__name__)
result: List

@app.route("/")
def home():
    return f"items: {len(result)}"
Reading the Flask docs, I get the impression I should use an application context. On the other hand, its lifetime does not span across requests, and I would not know how to populate it. Judging from other questions, I might use a Flask config object, because it seems to be available on every request; but this is not really about configuration. Or maybe I should use Klein, inspired by this answer?
There does not seem to be a best practice way. So I am going with a modification of my original approach:
from flask import Flask

class MyData:
    pass

class MyApp(Flask):
    def __init__(self) -> None:
        super().__init__(__name__)
        self.env = "development"
        self.debug = True

    def getData(self) -> MyData:
        return self._my_data

    def setMyData(self, my_data: MyData) -> None:
        self._my_data = my_data

app = MyApp()
This way I can set the data after the app instance has already been created, which is necessary to be able to use it in routing decorators defined outside the class. It would be nice to have more encapsulation: app methods for routing (with decorators) instead of module-level functions accessing a module-global app object. Apparently that is not "flaskic".
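For completeness, a rough sketch of how the pieces could fit together (compute_stuff() and the items attribute are assumptions carried over from the question, not a fixed API):

my_data = MyData()
my_data.items = compute_stuff()  # hypothetical: attach the computed list
app.setMyData(my_data)

@app.route("/")
def home():
    return f"items: {len(app.getData().items)}"

app.run()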
I want to create a simple CherryPy tool but fail to register it correctly. As soon as I attach it to any method, I get this error:
AttributeError: 'Toolbox' object has no attribute 'authenticate'
I tried
cherrypy.tools.authenticate = cherrypy.Tool('before_handler', authenticate)
and
@cherrypy.tools.register('before_handler')
def authenticate():
    ...
The issue is likely that I'm putting the function in the wrong place. I have a main file launching the server and all the apps:
# config stuff
if __name__ == '__main__':
    cherrypy.engine.unsubscribe('graceful', cherrypy.log.reopen_files)
    logging.config.dictConfig(LOG_CONF)
    cherrypy.tree.mount(app1(), '/app1')
    cherrypy.tree.mount(app2(), '/app2')
    cherrypy.quickstart(app3)
This file is launched by a systemd unit.
If I put the authenticate function in the config area, it doesn't work. If I put it in one of the apps directly and only use it in that app, it doesn't work. Always the same error.
So where do I have to place it to make this work?
Another case of me falling into the Python "definition order matters" trap.
Doesn't work:
class MyApp(object):
    # ...
    @cherrypy.tools.register('on_start_resource')
    def authenticate():
        ...
Works:
@cherrypy.tools.register('on_start_resource')
def authenticate():
    ...

class MyApp(object):
    ...
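Putting it together, a minimal runnable sketch (the token check is a hypothetical placeholder, not part of the original question):

import cherrypy

# Registered at import time, before any class body references the tool.
@cherrypy.tools.register('before_handler')
def authenticate():
    # hypothetical placeholder check
    if cherrypy.request.headers.get('X-Token') != 'secret':
        raise cherrypy.HTTPError(401)

class MyApp(object):
    @cherrypy.expose
    @cherrypy.tools.authenticate()
    def index(self):
        return "authenticated"

if __name__ == '__main__':
    cherrypy.quickstart(MyApp())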
I'm trying to use the wonderful apscheduler in a Pyramid API. The idea is to have a background job run regularly, while we still query the API for the result from time to time. Basically I use the job in a class as:
from apscheduler.schedulers.background import BackgroundScheduler

class MyClass(object):
    def __init__(self):
        self.current_result = 0
        scheduler = BackgroundScheduler()
        scheduler.start()
        scheduler.add_job(self.my_job, "interval", id="foo", seconds=5)

    def my_job(self):
        print("i'm updating result")
        self.current_result += 1
And outside of this class (a service for me), the API has a POST endpoint that returns the MyClass instance's current result:
class MyApi(object):
    def __init__(self):
        self.my_class = MyClass()

    @view_config(request_method='POST')
    def my_post(self):
        return self.my_class.current_result
When everything runs, I see the prints and the value being incremented inside the service. But current_result stays at 0 when fetched through the POST endpoint.
From what I know of threading, I guess the update is not happening on the same my_class object, but on a copy passed to the thread.
One solution I see would be to update the value through a shared intermediary (writing to disk, or to a database). But I wondered whether it is possible to do this in memory.
I managed to do exactly this in a regular script, and with one script plus a very simple Flask API (no class for the API there), but I can't get this logic to work inside the Pyramid API.
It must be linked to some internal detail of Pyramid spawning my API endpoint on a different thread, but I can't pin the problem down.
Thanks!
=== EDIT ===
I have tried several things to solve the issue. First, the MyClass instance used is initialized in another script, following a container pattern. That container is by default held by all MyApi instances of the Pyramid app, and it is supposed to hold all global variables linked to my project.
I also defined a global instance of MyClass just to be sure, and print its current_result value to compare:
global_my_class = MyClass()

class MyApi(object):
    def __init__(self):
        pass

    @view_config(request_method='POST')
    def my_post(self):
        print(global_my_class.current_result)
        return self.container.my_class.current_result
I checked with the debugger that MyClass is only instantiated twice during the API's execution (once for the global variable, once inside the container). And indeed, in the logging I see both current_result values getting incremented, yet each call to my_post still only returns 0.
An instance of a view class only lives for the duration of the request: a request comes in, the view class is instantiated, produces the result, and is disposed of. As such, each instance of your view gets a new copy of MyClass(), separate from the previous requests.
As a very simple solution you may try defining a global instance which will be shared process-wide:
my_class = MyClass()

class MyApi(object):
    @view_config(request_method='POST')
    def my_post(self):
        return my_class.current_result
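If you would rather avoid a module-level global, a common Pyramid pattern is to hang the shared instance off the registry, which is likewise process-wide. A sketch, assuming the MyClass from the question:

from pyramid.config import Configurator
from pyramid.response import Response

def my_post(request):
    # Every request reaches the same instance through the registry.
    return Response(str(request.registry.my_class.current_result))

def make_app():
    config = Configurator()
    config.add_route('result', '/result')
    config.add_view(my_post, route_name='result', request_method='POST')
    config.registry.my_class = MyClass()  # shared across all requests
    return config.make_wsgi_app()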
I'm trying to catch the flow_finished signal from django-viewflow like this:
flow_finished.connect(function)
but it's not working: the function isn't called even when the flow finishes.
Any help please, I'm pretty lost.
In my app's __init__.py I added this:
from django.apps import AppConfig

default_app_config = 'test.TestConfig'

class TestConfig(AppConfig):
    name = 'test'
    verbose_name = 'Test'

    def ready(self):
        import viewflow.signals
First, you need to ensure that your app config is properly set up and that the ready method is really being called. Check that INSTALLED_APPS properly includes your TestConfig, or, if you use the shortcut, check the default_app_config value in test/__init__.py:
from django.apps import AppConfig
from viewflow.signals import flow_finished

def receiver(sender, **kwargs):
    print('hi')

class TestConfig(AppConfig):
    name = 'test'

    def ready(self):
        flow_finished.connect(receiver)
But generally, using signals to weave your codebase together is in bad taste. To run an action before flow.End, you can explicitly add a flow.Handler. That's the recommended solution.
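A rough sketch of that approach, modeled on the viewflow hello-world demo (the node names and fields=[] are my own assumptions):

from viewflow import flow
from viewflow.base import this, Flow
from viewflow.flow.views import CreateProcessView

class MyFlow(Flow):
    start = flow.Start(CreateProcessView, fields=[]).Next(this.on_finish)

    # Runs synchronously inside the flow right before End;
    # no signal wiring required.
    on_finish = flow.Handler(this.handle_finish).Next(this.end)

    end = flow.End()

    def handle_finish(self, activation):
        print('flow finished')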
I'm using the web.py framework. For debugging purposes, I'd like to force all requests to be handled by a single thread, or to simulate such behaviour with a mutex. How can I do that?
Let me suggest something like this; note that it will only lock the application stack around your controller method.
import web
from threading import Lock

urls = ("/", "Index")

class Index:
    def GET(self):
        # This will be locked
        return "hello world"

def mutex_processor():
    mutex = Lock()
    def processor_func(handle):
        mutex.acquire()
        try:
            return handle()
        finally:
            mutex.release()
    return processor_func

app = web.application(urls, globals())
app.add_processor(mutex_processor())

if __name__ == "__main__":
    app.run()
UPD: if you need to lock the whole application stack, then you probably have to wrap app.wsgifunc with your own WSGI middleware. To get an idea, check my answer to this question.
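For illustration, a sketch of what such middleware could look like (my own illustration, not something web.py ships; urls and Index are from the snippet above):

import web
from threading import Lock

class SerializedApp:
    """Wrap a WSGI app so that requests are handled one at a time."""
    def __init__(self, wsgi_app):
        self.wsgi_app = wsgi_app
        self.lock = Lock()

    def __call__(self, environ, start_response):
        with self.lock:
            # Materialize the body inside the lock so lazy iteration
            # cannot continue after the lock is released.
            return [b"".join(self.wsgi_app(environ, start_response))]

app = web.application(urls, globals())
wsgi = SerializedApp(app.wsgifunc())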
To get things decently into a single-threaded debugging mode, the web.py app can be run with a single-threaded WSGI server.
Such a server is "almost" offered by web.py itself as web.httpserver.runbasic(), which uses Python's built-in BaseHTTPServer.HTTPServer, but also SocketServer.ThreadingMixIn.
This ThreadingMixIn, however, can be blocked out with something like this:
# single threaded execution of web.py app
app = web.application(urls, globals())

# suppress ThreadingMixIn in web.httpserver.runbasic()
import SocketServer

class NoThreadingMixIn:
    pass

assert SocketServer.ThreadingMixIn
SocketServer.ThreadingMixIn = NoThreadingMixIn

web.httpserver.runbasic(app.wsgifunc())
Or you could replicate the rather short web.httpserver.runbasic() code.
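For example, a single-threaded stand-in based on the stdlib's wsgiref (my suggestion, not what runbasic() does internally; urls again comes from the earlier snippet):

import web
from wsgiref.simple_server import make_server

app = web.application(urls, globals())
# wsgiref's default WSGIServer does not mix in ThreadingMixIn,
# so requests are handled strictly one at a time.
httpd = make_server("0.0.0.0", 8080, app.wsgifunc())
httpd.serve_forever()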