Unable to run celery task directly but still possible via Python console - python-3.x

I'd like to run a simple test (run a task) first via RabbitMQ and, once this is set up correctly, encapsulate it in Docker and run it from there.
My structure looks like so:
rabbitmq_docker/
    test_celery/
        __init__.py
        celeryapp.py
        celeryconfig.py
        run_tasks.py
        tasks.py
    docker-compose.yml
    dockerfile
    requirements.txt
celeryconfig.py
## List of modules to import when celery starts
CELERY_IMPORTS = ['test_celery.tasks',] # Required to import module containing tasks
## Message Broker (RabbitMQ) settings
CELERY_BROKER_URL = "amqp://guest@localhost//"
CELERY_BROKER_PORT = 5672
CELERY_RESULT_BACKEND = 'rpc://'
celeryapp.py
from celery import Celery
app = Celery('test_celery')
app.config_from_object('test_celery.celeryconfig', namespace='CELERY')
__init__.py
from .celeryapp import app as celery_app
run_tasks.py
from tasks import reverse
from celery.utils.log import get_task_logger
LOGGER = get_task_logger(__name__)
if __name__ == '__main__':
    async_result = reverse.delay("rabbitmq")
    LOGGER.info(async_result.get())
tasks.py
from test_celery.celeryapp import app
@app.task(name='tasks.reverse')
def reverse(string):
    return string[::-1]
I run celery -A test_celery worker --loglevel=info from the rabbitmq_docker directory. Then in a separate window I trigger reverse.delay("rabbitmq") in the Python console, after importing the required module. This works. Now, when I try to trigger the reverse function via run_tasks.py, i.e. python test_celery/run_tasks.py, I get:
Traceback (most recent call last):
  File "test_celery/run_tasks.py", line 1, in <module>
    from tasks import reverse
  File "/Users/my_mbp/Software/rabbitmq_docker/test_celery/tasks.py", line 1, in <module>
    from test_celery.celeryapp import app
ModuleNotFoundError: No module named 'test_celery'
What I don't get is why this Traceback doesn't get thrown when called directly from the Python console. Could anyone help me out here? I'd eventually like to startup docker, and just run the tests automatically (without going into the Python console).

The problem is simply that your module is not on the Python path.
These should help:
- Set PYTHONPATH to point to the directory that contains your test_celery package.
- Always run your Python code from the directory where your test_celery package is located.
- Or, alternatively, reorganise your imports (see the sketch after this list).
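For illustration, a minimal sketch that combines the last two points, assuming the layout shown above: keep a package-qualified import in run_tasks.py and launch it as a module, so Python puts rabbitmq_docker (rather than test_celery/) on sys.path:

# test_celery/run_tasks.py -- sketch: absolute import instead of "from tasks import reverse"
from test_celery.tasks import reverse

if __name__ == '__main__':
    async_result = reverse.delay("rabbitmq")
    print(async_result.get(timeout=10))

Run it from the rabbitmq_docker directory with python -m test_celery.run_tasks (or keep calling it as a script and export PYTHONPATH=/path/to/rabbitmq_docker first); either way the test_celery package becomes importable, which is what the interactive console session was already getting for free.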

Related

from azure.eventhub.aio import EventHubConsumerClient ModuleNotFoundError: No module named 'azure'

I try to run the following code with python3 recv.py in Visual Studio Code, but I'm getting the following error:
Traceback (most recent call last):
  File "recv.py", line 2, in <module>
    from azure.eventhub.aio import EventHubConsumerClient
ModuleNotFoundError: No module named 'azure'
import asyncio

from azure.eventhub.aio import EventHubConsumerClient
from azure.eventhub.extensions.checkpointstoreblobaio import BlobCheckpointStore


async def on_event(partition_context, event):
    # Print the event data.
    print("Received the event: \"{}\" from the partition with ID: \"{}\"".format(
        event.body_as_str(encoding='UTF-8'), partition_context.partition_id))
    # Update the checkpoint so that the program doesn't read the events
    # that it has already read when you run it next time.
    await partition_context.update_checkpoint(event)


async def main():
    # Create an Azure blob checkpoint store to store the checkpoints.
    checkpoint_store = BlobCheckpointStore.from_connection_string("connection_string", "containername")
    # Create a consumer client for the event hub.
    client = EventHubConsumerClient.from_connection_string(
        "connection_string", consumer_group="$Default",
        eventhub_name="eventhubinstance", checkpoint_store=checkpoint_store)
    async with client:
        # Call the receive method. Read from the beginning of the partition (starting_position: "-1")
        await client.receive(on_event=on_event, starting_position="-1")


if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    # Run the main method.
    loop.run_until_complete(main())
I tried executing the file from my iTerm terminal and it works fine. Can you tell me why it is not working in VS Code?
I'm using Python 3.7.9.
I have installed the package using pip3 install azure-eventhub (I have also tried with just pip), but the modules are still reported as missing even though they are not.
Running pip show azure-eventhub gives WARNING: Package(s) not found: azure-eventhub, but the package is there; I can see it in /usr/local/lib/python3.10/site-packages.
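A minimal diagnostic sketch (an assumption on my part, not from the original post): the question mixes Python 3.7.9 with a python3.10 site-packages path, so it is worth checking whether VS Code and iTerm are running the same interpreter:

# check_env.py -- run this from both the iTerm shell and the VS Code terminal;
# if sys.executable differs, pip3 installed azure-eventhub for a different
# interpreter than the one VS Code is using.
import sys

print(sys.executable)   # which python binary is running
print(sys.version)      # its version, e.g. 3.7.9 vs 3.10
print([p for p in sys.path if "site-packages" in p])  # where it looks for installed packages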

Unable to start Celery in Flask application

I am getting the following error when trying to start a Celery worker (Windows).
I'm using Celery 5.0.5.
celery.exe -A api.app -> ModuleNotFoundError: No module named 'api'
main.py
import os

from api.app import create_app

app = create_app(os.getenv("FLASK_ENV"))

if __name__ == '__main__':
    app.run(threaded=True, host='0.0.0.0')
api\app.py
from celery import Celery
from flask import Flask
from flask_redis import FlaskRedis
from flask_restful import Api

from api.config import env_config

redis_client = FlaskRedis()
celery = Celery(__name__, broker="redis://...")


def create_app(config_name):
    import resources

    app = Flask(__name__)
    app.config.from_object(env_config[config_name])
    redis_client.init_app(app)
    app.config.update(
        CELERY_BROKER_URL="redis://...",
        CELERY_RESULT_BACKEND="redis://..."
    )
    celery.conf.update(app.config)

    with app.app_context():
        from .routes import ccl_routes
        from .routes import scan_routes

    return app
Folder structure is like so:
helheim/
    api/
        app.py
        __init__.py
    main.py
What am I doing wrong? It's late here, so it's probably something obvious, but I can't see it :)
Thanks!
Edit: I think you can try celery -A api.app.celery worker before making any major changes.
I'm not sure what your entire file structure looks like, but with a layout like this, for example:
project/
    api/
        app/
        ...
you should start it with celery -A project.api.app.celery worker, replacing .celery with the name of your Celery instance. You can also refer to this repo: https://github.com/mushcatshiro/flask-template.
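For clarity, a minimal sketch of what that -A value resolves to (assuming the helheim layout above and that the Celery instance is the celery variable defined in api/app.py, as in the question):

# sketch: what `celery -A api.app.celery` does under the hood.
# Run from the folder that contains the api/ package (helheim/ in the question).
import importlib

module = importlib.import_module("api.app")   # the "api.app" part of -A
celery_instance = getattr(module, "celery")   # the trailing ".celery" attribute
print(type(celery_instance))                  # <class 'celery.app.base.Celery'>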
Also note: in my experience it is a bit challenging to work with Celery 4.x on Windows, so use Celery 5.0 instead.

How to get the correct path for a django script

Here is my directory structure.
V1:
project/
    AppUser/
        models.py, views.py, etc.
    project/
        settings.py, manage.py, etc.
    myscript.py
Here my script works perfectly:
import sys
import os
import django
sys.path.append("../../../project")
os.environ["DJANGO_SETTINGS_MODULE"] = "project.settings"
django.setup()
from AppUser.models import Subscription
maps = Subscription.objects.get(uuid="1234565")
print(maps)
It works fine; I launch it from the root of the project.
But when I want to put my script in a script folder:
V2:
project/
    AppUser/
        models.py, views.py, etc.
    project/
        settings.py, manage.py, etc.
    script/
        myscript.py
Here is my script:
import sys
import os
import django
sys.path.append("../../../../project")
os.environ["DJANGO_SETTINGS_MODULE"] = "project.settings"
django.setup()
from AppUser.models import Subscription
maps = Subscription.objects.get(uuid="123")
print(maps)
And when I am in script/ and I run python3 myscript.py, I get this error:
Traceback (most recent call last):
  File "myscript.py", line 12, in <module>
    from AppUser.models import Subscription
ModuleNotFoundError: No module named 'AppUser'
How can I be in script/ and not get this error?
The django.setup() seems to work fine, but after it there seems to be a problem.
To run the script, you don't have to be in the script folder, since you already updated the path in the script:
sys.path.append("../../../../project")
If you want to run it from the script folder, you can update the path in the script to:
sys.path.append("../../../project")
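As an alternative sketch (my own suggestion, not part of the original answer): derive the project root from the script's own location instead of a relative path, so the import works no matter which directory you launch it from:

# script/myscript.py -- resolve the project root relative to this file,
# not relative to the current working directory
import os
import sys

import django

PROJECT_ROOT = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))  # the parent of script/
sys.path.append(PROJECT_ROOT)
os.environ["DJANGO_SETTINGS_MODULE"] = "project.settings"
django.setup()

from AppUser.models import Subscription

maps = Subscription.objects.get(uuid="123")
print(maps)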

Python import from parent directory for dockerize structure

I have a project with two applications. They both use a mongoengine database models file. They also have to start in different Docker containers, but use the same Mongo database in a third container. Now my app structure looks like this:
app_root/
    app1/
        database/
            models.py
        main.py
    app2/
        database/
            models.py
        main.py
And it works fine, BUT I have to maintain two copies of the same database/models.py file. I don't want to do this, so I made the following structure:
app_root/
    shared/
        database/
            models.py
    app1/
        main.py
    app2/
        main.py
Unfortunately it doesn't work for me, because when I try this in my main.py:
from ..shared.database.models import *
I get:
Exception has occurred: ImportError
attempted relative import with no known parent package
And when I try:
from app_root.shared.database.models import *
I get:
Exception has occurred: ModuleNotFoundError: No module named 'app_root'
Please, what am I doing wrong?
In the file where you perform the import, try adding this:
import os
import sys

sys.path.append(os.path.abspath('../../..'))
from app_root.shared.database.models import *
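A variant of the same idea as a sketch (assuming main.py sits at app_root/app1/main.py): build the path from the file's own location rather than the working directory, which matters once the apps are started from inside Docker containers:

# app1/main.py -- put the directory that contains app_root/ on sys.path,
# regardless of the current working directory
import os
import sys

sys.path.append(os.path.abspath(os.path.join(os.path.dirname(__file__), "..", "..")))

from app_root.shared.database.models import *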

ModuleNotFoundError when running imported Flask app

I have a Python module with the following layout:
foo
| __init__.py
| __main__.py
| bar.py
__init__.py is empty.
Content of foo/bar.py:
from flask import Flask
app = Flask(__name__)
def baz(): pass
When running python3 -m foo I get confusing results.
Contents of foo/__main__.py:

# Results in a ModuleNotFoundError: No module named 'foo'
from foo.bar import app
app.run()

# Raises no error and correctly prints the type
from foo.bar import app
print(type(app))

# Also runs without an error
from foo.bar import baz
baz()
Why is it possible to import and execute a function from this module, but when trying to do the same with a flask app it results in a ModuleNotFoundError?
I just can't see any way this makes any sense.
Edit:
The error is persistent even with this code:
from foo.bar import app
print(type(app))
app.run()
Output:
<class 'flask.app.Flask'>
 * Serving Flask app "foo.bar" (lazy loading)
 * Environment: production
   WARNING: Do not use the development server in a production environment.
   Use a production WSGI server instead.
 * Debug mode: on
 * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
 * Restarting with stat
Traceback (most recent call last):
  File "/home/user/projects/ftest/foo/__main__.py", line 1, in <module>
    from foo.bar import app
ModuleNotFoundError: No module named 'foo'
So, obviously the module can be imported, because type(app) works just fine and Flask does start. It seems like Flask does a reload and is messing around with imports somehow.
Edit 2:
I turned debug mode off and it works just fine.
This error only occurs if you set export FLASK_DEBUG=True or explicitly enable debug via app.config["DEBUG"] = True
It turns out it's a bug in werkzeug.
The code works as expected if werkzeug's reloader is disabled.
How to reproduce the behaviour
Directory structure:
foo
| __init__.py
| __main__.py
Content of __init__.py:
from flask import Flask
app = Flask(__name__)
app.config["DEBUG"] = True
Content of __main__.py:
from foo import app
app.run()
If we run it:
$ python3 -m foo
 * Serving Flask app "foo" (lazy loading)
 * Environment: development
 * Debug mode: on
 * Running on http://127.0.0.1:5000/ (Press CTRL+C to quit)
 * Restarting with stat
Traceback (most recent call last):
  File "/home/user/projects/ftest/foo/__main__.py", line 1, in <module>
    from foo import app
ModuleNotFoundError: No module named 'foo'
If we change __main__.py:
from foo import app
app.run(use_reloader=False)
Everything works just fine.
What's going on
The problem is in werkzeug._reloader.ReloaderLoop.restart_with_reloader. It calls a subprocess with the arguments provided by werkzeug._reloader._get_args_for_reloading but this function does not behave as expected when executing a package via the -m switch.
def _get_args_for_reloading():
    """Returns the executable. This contains a workaround for windows
    if the executable is incorrectly reported to not have the .exe
    extension which can cause bugs on reloading.
    """
    rv = [sys.executable]
    py_script = sys.argv[0]
    if os.name == 'nt' and not os.path.exists(py_script) and \
            os.path.exists(py_script + '.exe'):
        py_script += '.exe'
    if os.path.splitext(rv[0])[1] == '.exe' and os.path.splitext(py_script)[1] == '.exe':
        rv.pop(0)
    rv.append(py_script)
    rv.extend(sys.argv[1:])
    return rv
In our case it returns ['/usr/local/bin/python3.7', '/home/user/projects/ftest/foo/__main__.py']. This is because sys.argv[0] is set to the full path of the module file, but it should return ['/usr/local/bin/python3.7', '-m', 'foo'] (at least, from my understanding, that is how it should behave, and it works this way).
I have no good idea of how to fix this behaviour, or whether it is something that needs to be fixed. It just seems weird to me that I'm the only one who has encountered this problem, since it doesn't seem like too much of a corner case to me.
Adding the following line before app.run() works around the werkzeug reloader bug:
os.environ['PYTHONPATH'] = os.getcwd()
Thanks to @bootc for the tip! https://github.com/pallets/flask/issues/1246
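Put together, a minimal sketch of that workaround (assuming the foo/__init__.py from the repro above and that you launch python3 -m foo from the directory that contains foo/):

# foo/__main__.py -- workaround: let the reloader's restarted subprocess find 'foo'
import os

from foo import app

os.environ['PYTHONPATH'] = os.getcwd()  # the directory that contains the foo/ package
app.run()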
Have you tried from foo import app in your __main__.py file?
