I am new to Python and am trying to install Airflow on my Mac by following this tutorial.
While these two commands work fine:
$ airflow initdb
$ airflow webserver -p 8080
The scheduler command (airflow scheduler) throws the following error:
[2020-02-18 13:18:09,012] {scheduler_job.py:1382} ERROR - Exception when executing execute_helper
Traceback (most recent call last):
File "/Users/mac/Workspace/airflow/airflow_venv/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 1380, in _execute
self._execute_helper()
File "/Users/mac/Workspace/airflow/airflow_venv/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 1413, in _execute_helper
self.processor_agent.start()
File "/Users/mac/Workspace/airflow/airflow_venv/lib/python3.8/site-packages/airflow/utils/dag_processing.py", line 554, in start
self._process.start()
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/process.py", line 121, in start
self._popen = self._Popen(self)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py", line 224, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py", line 283, in _Popen
return Popen(process_obj)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 32, in __init__
super().__init__(process_obj)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_fork.py", line 19, in __init__
self._launch(process_obj)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 47, in _launch
reduction.dump(process_obj, fp)
File "/Library/Frameworks/Python.framework/Versions/3.8/lib/python3.8/multiprocessing/reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
AttributeError: Can't pickle local object 'SchedulerJob._execute.<locals>.processor_factory'
[2020-02-18 13:18:09,035] {helpers.py:322} INFO - Sending Signals.SIGTERM to GPID None
Traceback (most recent call last):
File "/Users/mac/Workspace/airflow/airflow_venv/bin/airflow", line 37, in <module>
args.func(args)
File "/Users/mac/Workspace/airflow/airflow_venv/lib/python3.8/site-packages/airflow/utils/cli.py", line 75, in wrapper
return f(*args, **kwargs)
File "/Users/mac/Workspace/airflow/airflow_venv/lib/python3.8/site-packages/airflow/bin/cli.py", line 1040, in scheduler
job.run()
File "/Users/mac/Workspace/airflow/airflow_venv/lib/python3.8/site-packages/airflow/jobs/base_job.py", line 221, in run
self._execute()
File "/Users/mac/Workspace/airflow/airflow_venv/lib/python3.8/site-packages/airflow/jobs/scheduler_job.py", line 1384, in _execute
self.processor_agent.end()
File "/Users/mac/Workspace/airflow/airflow_venv/lib/python3.8/site-packages/airflow/utils/dag_processing.py", line 707, in end
reap_process_group(self._process.pid, log=self.log)
File "/Users/mac/Workspace/airflow/airflow_venv/lib/python3.8/site-packages/airflow/utils/helpers.py", line 324, in reap_process_group
signal_procs(sig)
File "/Users/mac/Workspace/airflow/airflow_venv/lib/python3.8/site-packages/airflow/utils/helpers.py", line 293, in signal_procs
os.killpg(pgid, sig)
TypeError: an integer is required (got type NoneType)
EDIT: Python 3.8 is supported now (https://github.com/apache/airflow#requirements), so this answer may no longer be relevant.
This is due to the Python version you are using. Airflow doesn't support Python 3.8 yet (https://github.com/apache/airflow#stable-version-1109).
Downgrade your Python to 3.7 and try again.
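For context (my own explanation, not part of the original answer): Python 3.8 changed the default multiprocessing start method on macOS from fork to spawn, and spawn has to pickle the target of the child process. Airflow 1.10's scheduler passes a locally defined processor_factory, which can't be pickled, which is exactly the first AttributeError above. A minimal, Airflow-free sketch of the same failure:

# Minimal illustration (not Airflow code): under the "spawn" start method,
# multiprocessing pickles the Process object, and a function defined inside
# another function cannot be pickled.
import multiprocessing as mp

def outer():
    def local_target():  # plays the role of SchedulerJob's processor_factory
        print("hello from the child process")

    p = mp.Process(target=local_target)
    p.start()  # AttributeError: Can't pickle local object 'outer.<locals>.local_target'

if __name__ == "__main__":
    mp.set_start_method("spawn")  # the default on macOS since Python 3.8
    outer()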
Maybe there are some compatibility problems? Using Python 3.6.10 and Airflow v1.10.4, I can get Airflow running, so you could try a different combination of versions.
This worked for me!
1- Make sure you are using a Celery version that supports your other packages, such as RabbitMQ (Celery v5 doesn't support AMQP in its usual format); my advice is to use v4.6.x.
2- This has nothing to do with the Python version if you are using Airflow v2.0.
3- Simply make yourself happy with airflow db reset (the command may differ if you are using an Airflow version older than 2.0).
4- Avoid deleting any DAG the way you would delete a file; use the airflow dags ... commands instead (it makes a mess in your environment that you won't like, trust me on this).
Wish you luck bearing with the Python stuff.
I am trying to run a simple Lambda function using AWS SAM, version 1.57.0.
I've installed Node.js version 14.18.3 on my Ubuntu system.
When I try to run the project it gives the error Unsupported Lambda runtime nodejs18.x.
Below is the full stack trace:
Invoking index.handler (nodejs18.x)
Traceback (most recent call last):
File "samcli/__main__.py", line 12, in <module>
File "click/core.py", line 829, in __call__
File "click/core.py", line 782, in main
File "click/core.py", line 1259, in invoke
File "click/core.py", line 1259, in invoke
File "click/core.py", line 1066, in invoke
File "click/core.py", line 610, in invoke
File "click/decorators.py", line 73, in new_func
File "click/core.py", line 610, in invoke
File "samcli/lib/telemetry/metric.py", line 176, in wrapped
File "samcli/lib/telemetry/metric.py", line 126, in wrapped
File "samcli/lib/utils/version_checker.py", line 41, in wrapped
File "samcli/cli/main.py", line 86, in wrapper
File "samcli/commands/local/invoke/cli.py", line 106, in cli
File "samcli/commands/local/invoke/cli.py", line 183, in do_cli
File "samcli/commands/local/lib/local_lambda.py", line 144, in invoke
File "samcli/lib/telemetry/metric.py", line 240, in wrapped_func
File "samcli/local/lambdafn/runtime.py", line 177, in invoke
File "samcli/local/lambdafn/runtime.py", line 88, in create
File "samcli/local/docker/lambda_container.py", line 91, in __init__
ValueError: Unsupported Lambda runtime nodejs18.x
[43955] Failed to execute script __main__
I did have Node version 18 installed on the system prior to this. I thought that might be causing the issue, so I uninstalled that version and installed version 14.
I have no idea why SAM is running it on Node version 18.
I just recently looked into this as well, since Node 18 is the current LTS. If you go to the Serverless Image Repository you'll see that AWS SAM doesn't currently have an image for Node 18. There's an explanation about this in this GitHub issue.
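In other words, sam local invoke never looks at the Node.js version installed on your machine: it maps the Runtime string from template.yaml to a Lambda emulation Docker image, and your SAM version has no image for nodejs18.x. Conceptually the check looks something like the sketch below; this is illustrative only, not the actual SAM CLI code.

# Illustrative sketch only, not the real samcli implementation: the runtime
# string from template.yaml is mapped to an emulation image, and anything
# unknown raises the ValueError seen in the stack trace.
SUPPORTED_RUNTIMES = {"nodejs12.x", "nodejs14.x", "nodejs16.x"}  # example set

def resolve_image(runtime: str) -> str:
    if runtime not in SUPPORTED_RUNTIMES:
        raise ValueError(f"Unsupported Lambda runtime {runtime}")
    return f"public.ecr.aws/sam/emulation-{runtime}"  # illustrative image name

print(resolve_image("nodejs14.x"))
resolve_image("nodejs18.x")  # raises, just like the trace above

Until your SAM CLI version ships Node 18 images, the practical options are to declare a runtime it does support in template.yaml or to upgrade the SAM CLI once those images are published.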
Environment(s)
Ubuntu 20.04 & Debian 10 with Python 3.8 or 3.7, respectively.
PostgreSQL versions 11, 12, and 14 have been tried.
psycopg2-binary 2.8.6
Overview
I'm attempting to install a Django project, and I'm getting this error:
psycopg2.errors.UndefinedFile: could not open extension control file "/usr/share/pgsql/extension/citext.control": No such file or directory
The psycopg devs informed me this is likely an issue with the postgresql-contrib libraries. Similarly, others have been able to fix this error by installing postgresql-contrib; however, this does not work for me. I've also tried installing postgresql-12.
I can see that citext.control is available in /usr/share/postgresql/12/extension/citext.control, so I tried ln -s /usr/share/postgresql/12 /usr/share/pgsql with no effect.
I also ran CREATE EXTENSION citext; in Postgres, again without effect.
Any support with this would be greatly appreciated, as I was hoping to have this project live already!
Thanks so much.
Trace
Running migrations:
Applying core.0043_install_ci_extension_pg...
Traceback (most recent call last):
File "/home/user/venvs/janeway/lib/python3.8/site-packages/django/db/backends/utils.py", line 64, in execute
return self.cursor.execute(sql, params)
psycopg2.errors.UndefinedFile: could not open extension control file "/usr/share/pgsql/extension/citext.control": No such file or directory
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "manage.py", line 16, in <module>
execute_from_command_line(sys.argv)
File "/home/user/venvs/janeway/lib/python3.8/site-packages/django/core/management/__init__.py", line 364, in execute_from_command_line
utility.execute()
File "/home/user/venvs/janeway/lib/python3.8/site-packages/django/core/management/__init__.py", line 356, in execute
self.fetch_command(subcommand).run_from_argv(self.argv)
File "/home/user/venvs/janeway/lib/python3.8/site-packages/django/core/management/base.py", line 283, in run_from_argv
self.execute(*args, **cmd_options)
File "/home/user/venvs/janeway/lib/python3.8/site-packages/django/core/management/base.py", line 330, in execute
output = self.handle(*args, **options)
File "/home/user/janeway/src/utils/management/commands/install_janeway.py", line 58, in handle
call_command('migrate')
File "/home/user/venvs/janeway/lib/python3.8/site-packages/django/core/management/__init__.py", line 131, in call_command
return command.execute(*args, **defaults)
File "/home/user/venvs/janeway/lib/python3.8/site-packages/django/core/management/base.py", line 330, in execute
output = self.handle(*args, **options)
File "/home/user/venvs/janeway/lib/python3.8/site-packages/django/core/management/commands/migrate.py", line 202, in handle
post_migrate_state = executor.migrate(
File "/home/user/venvs/janeway/lib/python3.8/site-packages/django/db/migrations/executor.py", line 115, in migrate
state = self._migrate_all_forwards(state, plan, full_plan, fake=fake, fake_initial=fake_initial)
File "/home/user/venvs/janeway/lib/python3.8/site-packages/django/db/migrations/executor.py", line 145, in _migrate_all_forwards
state = self.apply_migration(state, migration, fake=fake, fake_initial=fake_initial)
File "/home/user/venvs/janeway/lib/python3.8/site-packages/django/db/migrations/executor.py", line 244, in apply_migration
state = migration.apply(state, schema_editor)
File "/home/user/venvs/janeway/lib/python3.8/site-packages/django/db/migrations/migration.py", line 129, in apply
operation.database_forwards(self.app_label, schema_editor, old_state, project_state)
File "/home/user/venvs/janeway/lib/python3.8/site-packages/django/contrib/postgres/operations.py", line 17, in database_forwards
schema_editor.execute("CREATE EXTENSION IF NOT EXISTS %s" % schema_editor.quote_name(self.name))
File "/home/user/venvs/janeway/lib/python3.8/site-packages/django/db/backends/base/schema.py", line 136, in execute
cursor.execute(sql, params)
File "/home/user/venvs/janeway/lib/python3.8/site-packages/django/db/backends/utils.py", line 64, in execute
return self.cursor.execute(sql, params)
File "/home/user/venvs/janeway/lib/python3.8/site-packages/django/db/utils.py", line 94, in __exit__
six.reraise(dj_exc_type, dj_exc_value, traceback)
File "/home/user/venvs/janeway/lib/python3.8/site-packages/django/utils/six.py", line 685, in reraise
raise value.with_traceback(tb)
File "/home/user/venvs/janeway/lib/python3.8/site-packages/django/db/backends/utils.py", line 64, in execute
return self.cursor.execute(sql, params)
django.db.utils.OperationalError: could not open extension control file "/usr/share/pgsql/extension/citext.control": No such file or directory
Have you tried installing python3-psycopg2 from the packaging system instead?
My Ubuntu 20.04 setup has this installed, and PostgreSQL connections work with Django.
ii python3-psycopg2 2.8.4-2 amd64 Python 3 module for PostgreSQL
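If it helps, you can confirm which psycopg2 build the interpreter used by Django is actually importing; this is just a sanity-check snippet of my own, not part of the original answer.

# Sanity check: which psycopg2 module is Django's interpreter importing?
import psycopg2

print(psycopg2.__file__)     # distro package vs. virtualenv wheel
print(psycopg2.__version__)  # e.g. "2.8.4 (dt dec pq3 ext lib=...)"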
I had installed postgresql-contrib on the local Django server rather than on the remote DB server. Installing it on the same server as PostgreSQL resolved the issue.
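For anyone hitting the same thing: the migration in the trace is executing Django's CREATE EXTENSION operation, and CREATE EXTENSION runs inside the PostgreSQL server, so citext.control has to exist on the database host, not on the machine running Django. A hypothetical migration of that shape (illustrative names, not the project's actual file) looks like this:

# Hypothetical sketch of a migration like core.0043_install_ci_extension_pg:
# the CITextExtension operation issues CREATE EXTENSION IF NOT EXISTS citext
# on the PostgreSQL server, which is why postgresql-contrib must be installed
# there rather than on the Django host.
from django.contrib.postgres.operations import CITextExtension
from django.db import migrations


class Migration(migrations.Migration):

    dependencies = [
        ("core", "0042_some_previous_migration"),  # illustrative dependency
    ]

    operations = [
        CITextExtension(),
    ]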
I have a problem executing my Python script from crontab; the script performs an insert operation into the Firestore database:
db.collection(u'ab').document(str(row["Name"])).collection(str(row["id"])).document(str(row2["id"])).set(self.packStructure(row2))
When I execute it normally with the python3 script.py command it works, but when I execute it from crontab it returns the following error:
The above exception was the direct cause of the following exception:
Traceback (most recent call last):
File "/home/axatel/angel_bridge/esportazione_firebase/main.py", line 23, in <module>
dato.getDati(dato, db, cursor, cursor2, fdb, select, anagrafica)
File "/home/axatel/angel_bridge/esportazione_firebase/dati.py", line 19, in getDati
db.collection(u'ab').document(str(row["Name"])).collection(str(row["id"])).document(str(row2["id"])).set(self.packStructure(row2))
File "/home/axatel/.local/lib/python3.7/site-packages/google/cloud/firestore_v1/document.py", line 234, in set
write_results = batch.commit()
File "/home/axatel/.local/lib/python3.7/site-packages/google/cloud/firestore_v1/batch.py", line 147, in commit
metadata=self._client._rpc_metadata,
File "/home/axatel/.local/lib/python3.7/site-packages/google/cloud/firestore_v1/gapic/firestore_client.py", line 1121, in commit
request, retry=retry, timeout=timeout, metadata=metadata
File "/home/axatel/.local/lib/python3.7/site-packages/google/api_core/gapic_v1/method.py", line 145, in __call__
return wrapped_func(*args, **kwargs)
File "/home/axatel/.local/lib/python3.7/site-packages/google/api_core/retry.py", line 286, in retry_wrapped_func
on_error=on_error,
File "/home/axatel/.local/lib/python3.7/site-packages/google/api_core/retry.py", line 184, in retry_target
return target()
File "/home/axatel/.local/lib/python3.7/site-packages/google/api_core/timeout.py", line 214, in func_with_timeout
return func(*args, **kwargs)
File "/home/axatel/.local/lib/python3.7/site-packages/google/api_core/grpc_helpers.py", line 59, in error_remapped_callable
six.raise_from(exceptions.from_grpc_error(exc), exc)
File "<string>", line 3, in raise_from
google.api_core.exceptions.ServiceUnavailable: 503 DNS resolution failed for service: firestore.googleapis.com:443
I really don't understand what the problem is, because the connection to the database works every time the script is started, either way.
Is there a fix for this kind of issue?
I found something that might be helpful. There is a nice troubleshooting guide, and a part of it seems related:
If your command works by invoking a runtime like python some-command.py, perform a few checks to determine that the runtime version and environment is correct. Each language runtime has quirks that can cause unexpected behavior under crontab.
For python you might find that your web app is using a virtual environment you need to invoke in your crontab.
I haven't seen such an error running the Firestore API, but this seems to match your issue.
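To see which interpreter and environment cron is actually using, you could temporarily add something like this at the top of the script (a debugging sketch of my own, not part of the guide):

# Debugging sketch: log which interpreter and environment the script runs
# under, so a cron run can be compared with a manual "python3 script.py" run.
import os
import sys

with open("/tmp/cron_env_debug.log", "a") as log:
    log.write(f"interpreter: {sys.executable}\n")
    log.write(f"PATH: {os.environ.get('PATH', '')}\n")
    log.write(f"HOME: {os.environ.get('HOME', '')}\n")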
I found the solution.
The problem occurred because the sleep() timeout value was lower than expected, so the database connection function started too early during the boot phase of the machine. Increasing this value to 45 or 60 seconds fixed the problem.
import sys
import time

import firebase_admin
from firebase_admin import credentials, firestore

def firebaseConnection():
    # firebase connection
    cred = credentials.Certificate('/database/axatel.json')
    firebase_admin.initialize_app(cred)
    fdb = firestore.client()
    if fdb:
        return fdb
    else:
        print("Error")
        sys.exit()

# time.sleep(10)  # old version
time.sleep(60)  # working version: give the network time to come up after boot
fdb = firebaseConnection()
My local RethinkDB database disappeared after upgrading to macOS 11 (“Big Sur”). But that's alright, as I have a backup. I just can't, for the life of me, restore it.
rethinkdb restore [file] and rethinkdb import [file] didn't work, but after some googling I found that I had to install the Python tools (pip3 install rethinkdb). I did that, but now I get this error:
% rethinkdb restore rethinkdb_dump_2020-11-21T20:35:20.tar.gz > bug.log
Traceback (most recent call last):
File "/usr/local/bin/rethinkdb-restore", line 10, in <module>
sys.exit(main())
File "/Users/$USER/Library/Python/3.8/lib/python/site-packages/rethinkdb/_restore.py", line 339, in main
do_restore(options)
File "/Users/$USER/Library/Python/3.8/lib/python/site-packages/rethinkdb/_restore.py", line 315, in do_restore
_import.import_tables(options, sources)
File "/Users/$USER/Library/Python/3.8/lib/python/site-packages/rethinkdb/_import.py", line 1359, in import_tables
progress_bar.start()
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/process.py", line 121, in start
self._popen = self._Popen(self)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py", line 224, in _Popen
return _default_context.get_context().Process._Popen(process_obj)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/context.py", line 283, in _Popen
return Popen(process_obj)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 32, in __init__
super().__init__(process_obj)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_fork.py", line 19, in __init__
self._launch(process_obj)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/popen_spawn_posix.py", line 47, in _launch
reduction.dump(process_obj, fp)
File "/Library/Developer/CommandLineTools/Library/Frameworks/Python3.framework/Versions/3.8/lib/python3.8/multiprocessing/reduction.py", line 60, in dump
ForkingPickler(file, protocol).dump(obj)
TypeError: cannot pickle '_thread._local' object
I've tried uninstalling RethinkDB and the Python tools and reinstalling both. Same outcome.
I was able to downgrade to Python 3.7, reinstall the rethinkdb Python driver, and get further in the rethinkdb restore process.
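For context (my own reading of the trace, not part of the original answer): Python 3.8 switched the default multiprocessing start method on macOS from fork to spawn, and spawn pickles the state the child process needs; something reachable from the progress bar's process (the trace shows progress_bar.start()) holds a _thread._local, which can't be pickled. A minimal illustration, unrelated to RethinkDB's own code:

# Minimal illustration (not rethinkdb code): a _thread._local object cannot be
# pickled, and the "spawn" start method pickles the child process state.
import pickle
import threading

pickle.dumps(threading.local())
# TypeError: cannot pickle '_thread._local' object

Under Python 3.7 the default start method on macOS is still fork, which doesn't need to pickle anything, which is presumably why the downgrade gets further.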
Here is the detailed error when launching Jupyter Notebook with Python version 3.8:
File "c:\users\kokat\appdata\local\programs\python\python38\lib\runpy.py", line 192, in _run_module_as_main
return _run_code(code, main_globals, None,
File "c:\users\kokat\appdata\local\programs\python\python38\lib\runpy.py", line 85, in _run_code
exec(code, run_globals)
File "C:\Users\kokat\AppData\Local\Programs\Python\Python38\Scripts\jupyter-notebook.EXE\__main__.py", line 9, in <module>
File "c:\users\kokat\appdata\local\programs\python\python38\lib\site-packages\jupyter_core\application.py", line 266, in launch_instance
return super(JupyterApp, cls).launch_instance(argv=argv, **kwargs)
File "c:\users\kokat\appdata\local\programs\python\python38\lib\site-packages\traitlets\config\application.py", line 657, in launch_instance
app.initialize(argv)
File "<c:\users\kokat\appdata\local\programs\python\python38\lib\site-packages\decorator.py:decorator-gen-7>", line 2, in initialize
File "c:\users\kokat\appdata\local\programs\python\python38\lib\site-packages\traitlets\config\application.py", line 87, in catch_config_error
return method(app, *args, **kwargs)
File "c:\users\kokat\appdata\local\programs\python\python38\lib\site-packages\notebook\notebookapp.py", line 1628, in initialize
self.init_webapp()
File "c:\users\kokat\appdata\local\programs\python\python38\lib\site-packages\notebook\notebookapp.py", line 1407, in init_webapp
self.http_server.listen(port, self.ip)
File "c:\users\kokat\appdata\local\programs\python\python38\lib\site-packages\tornado\tcpserver.py", line 144, in listen
self.add_sockets(sockets)
File "c:\users\kokat\appdata\local\programs\python\python38\lib\site-packages\tornado\tcpserver.py", line 157, in add_sockets
self._handlers[sock.fileno()] = add_accept_handler(
File "c:\users\kokat\appdata\local\programs\python\python38\lib\site-packages\tornado\netutil.py", line 268, in add_accept_handler
io_loop.add_handler(sock, accept_handler, IOLoop.READ)
File "c:\users\kokat\appdata\local\programs\python\python38\lib\site-packages\tornado\platform\asyncio.py", line 79, in add_handler
self.asyncio_loop.add_reader(
File "c:\users\kokat\appdata\local\programs\python\python38\lib\asyncio\events.py", line 498, in add_reader
raise NotImplementedError
NotImplementedError
Any help would be appreciated.
Following one of the comments in this thread, I've solved the problem by:
Locate and open the asyncio.py file. In my case it's in C:\Users\[USERNAME]\AppData\Local\Programs\Python\Python38-32\Lib\site-packages\tornado\platform\
After the line import asyncio add the following:
from sys import platform
if platform == "win32":
    asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())  # python-3.8.0a4
This will force asyncio to use the SelectorEventLoop on Windows.
Jupyter Notebook runs as a Tornado web server, and your browser connects to this Tornado server via a socket.
The I/O of the socket is handled by Tornado's asyncio integration, which relies on the add_reader implementation of Python's native asyncio module. As pointed out in the asyncio documentation, this method is only supported with the Windows SelectorEventLoop, so make sure your Python installation is using that kind of event loop. To find out which event loop implementation is in use, you can run the following commands in a Python shell:
import asyncio
print(asyncio.get_event_loop().__class__)
# Output: <class 'asyncio.windows_events._WindowsSelectorEventLoop'>
There is an ongoing discussion about allowing users to change the EventLoopPolicy of asyncio in Jupyter's configuration file.