I'm trying to run a microservice on AWS Lambda, and because it requires the NumPy and pymysql dependencies I've followed the steps outlined here.
After uploading the dependencies and code to S3 and trying to run my test functions, I receive this error:
Traceback (most recent call last):
File "/var/runtime/awslambda/bootstrap.py", line 538, in <module>
main()
File "/var/runtime/awslambda/bootstrap.py", line 528, in main
run_init_handler(init_handler, invokeid)
File "/var/runtime/awslambda/bootstrap.py", line 94, in run_init_handler
init_handler()
TypeError: 'module' object is not callable
Any ideas on what could have happened? It runs fine on both my EC2 instance and my local computer.
Lambda now has "layers", which could (and should) help you with that.
But for others in the future, I had the exact same problem.
I had just finished refactoring a single-file Lambda Python module into a set of files, one of which was init.py. It turns out that if you have a module named init.py sitting next to an __init__.py in a package, some part of the AWS bootstrap process can't handle the import: if the file imports cleanly, your function times out; if it doesn't, you get the traceback above.
I renamed init.py to connect.py (because I'm only setting up connection information for on-demand connections), stopped seeing the OP's traceback, and was able to move on.
I haven't attempted to reproduce with variations on init*.py module names.
Very strange edge case to run into.
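For reference, the layout that triggered it looked roughly like this (my_package and handler.py are placeholder names; the important part is init.py sitting next to __init__.py):
my_package
    __init__.py
    init.py        <- renaming this to connect.py made the traceback go away
    handler.py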
I also came across this in a serverless app, and it turned out to be down to how the handler was defined in serverless.yaml. The bootstrap takes this value and tries to execute it. In my case it was not pointing to a function in my Python file but to one of the modules imported by the file.
functions:
  some_lambda:
    handler: src/somefile.jwt
In somefile.py there was an import of jwt. The code was meant to call a function jwt_auth, but the handler ended up pointing at the jwt module itself, causing the error seen by the OP.
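For illustration, only the names jwt and jwt_auth come from the answer above; the handler body below is made up. The fix is to point the handler string at the function, i.e. handler: src/somefile.jwt_auth rather than handler: src/somefile.jwt:
# src/somefile.py (sketch)
import jwt  # the module the handler string accidentally resolved to


def jwt_auth(event, context):
    # a typical Lambda handler signature; the token check here is illustrative
    token = event.get("headers", {}).get("Authorization", "")
    return {"statusCode": 200 if token else 401}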
current_call.transfer("sip:1001#xx.xx.xx.xx") works in Python 2.7; however, it is not working in Python 3.7.
Below is the error:
Original exception was:
Traceback (most recent call last):
File "trycall.py", line 151, in <module>
current_call.transfer("sip:1001#xx.xx.xx.xx")
File "/usr/local/lib/python3.7/dist-packages/pjsua.py", line 1734, in transfer
Lib._create_msg_data(hdr_list))
SystemError: <built-in function call_xfer> returned NULL without setting an error
I am using this branch of PJSIP: Link
for the PJSUA implementation with Python 3.6, and I encountered the same problem with transfer.
Removing this check from the function py_pjsua_call_xfer (pjproject/pjsip-apps/src/python/_pjsua.c) solved my problem:
if (!PyBytes_Check(pDstUri))
return NULL;
This check always returned NULL in all my tests. I was not able to solve this from the Python side. Removing the code mentioned above fixed the issue, and so far it hasn't created any new problems. I have tested this modification with Asterisk and 3 SIP endpoints, and the transfer was processed correctly.
(Note: I am not a C/C++ programmer, so I cannot give a detailed explanation of why this check fails. This approach is based on trial and error.)
For whatever reason, Python is not allowing me to access a custom method I created in moviepy's preview.py file. I just want to know how to correctly implement it into the file. For reference, before I changed the name of the method, it was working correctly.
I checked at least two __init__.py files and they were effectively empty. I couldn't find where methods are initialized, which is probably what I'm missing.
I also tried restarting Git Bash and that didn't work either (another solution I saw).
Original:
@convert_masks_to_RGB
def preview(clip, fps=15, audio=True, audio_fps=22050, audio_buffersize=3000,
audio_nbytes=2, fullscreen=False):
Changed:
@requires_duration
@convert_masks_to_RGB
def preview_custom(clip, marker_overlay="marker_overlay.png", fps=15, audio=True, audio_fps=22050, audio_buffersize=3000,
audio_nbytes=2, fullscreen=False):
There are more than a few differences between the changed and the original method; however, at the moment the only result I expect is for the method to be called correctly. The error is below:
Traceback (most recent call last):
File "T3AJM.py", line 249, in <module>
main()
File "T3AJM.py", line 34, in main
GUI_main_menu()
File "T3AJM.py", line 85, in GUI_main_menu
GUI_play_markers()
File "T3AJM.py", line 125, in GUI_play_markers
video.preview_custom(marker_overlay=TEMP_OVERLAY_FILE)
AttributeError: 'VideoFileClip' object has no attribute 'preview_custom'
Thank you for your time.
I'm not even sure if this technically fixes the problem, but just doing:
from moviepy.video.io.preview import *
and
preview_custom(video, marker_overlay=TEMP_OVERLAY_FILE)
fixed the problem. I have no idea why I had to change the way it was called, since clip.preview(), or in this case video.preview(), worked perfectly fine before, but whatever.
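A likely explanation (an assumption based on how moviepy 1.x wires things up, not something confirmed in the thread): preview only works as a method because moviepy.editor attaches the module-level function to the clip class with an assignment like VideoClip.preview = preview. A newly added function in preview.py never gets attached, so video.preview_custom(...) raises AttributeError even though calling the plain function works. A minimal sketch that would restore the method-style call:
from moviepy.video.VideoClip import VideoClip
from moviepy.video.io.preview import preview_custom

# attach the new module-level function as a method, mirroring what
# moviepy.editor does for the stock preview function
VideoClip.preview_custom = preview_custom

# video is the VideoFileClip from the original script
video.preview_custom(marker_overlay=TEMP_OVERLAY_FILE)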
I have a stack trace created by faulthandler after a fatal interpreter crash. Its content looks like this:
File "/path/to/file.py", line <line-number> in <function-name>
File "/path/to/file.py", line <line-number> in <function-name>
I want to create a traceback object from this file, similar to the one from sys.exc_info(), so that I can upload it to Sentry. Is there any module that would make this easier?
I will not have the frame variables, but it should be possible to reconstruct the code objects, with the contents of the files, from the traceback.
For now the only solution I can think of is to write a class that behaves like a traceback object, but that seems like a lot of work (especially if I want the code as well).
In the end I prepared my own class that behaves like a traceback object (duck typing). The only thing that mattered was setting valid f_code.co_filename and f_code.co_name; the Sentry client then extracts the source code itself.
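The post doesn't show the implementation, so the sketch below is only an assumption of what such a duck-typed traceback could look like; every name in it (FakeCode, FakeFrame, FakeTraceback, traceback_from_faulthandler) is made up for illustration:
import re

# faulthandler writes lines of the form:  File "/path/to/file.py", line 12 in some_function
FAULTHANDLER_LINE = re.compile(r'File "(?P<filename>.+)", line (?P<lineno>\d+) in (?P<name>.+)')


class FakeCode:
    def __init__(self, co_filename, co_name):
        self.co_filename = co_filename
        self.co_name = co_name


class FakeFrame:
    def __init__(self, code):
        # only the attributes a Sentry client reads when rendering a frame
        self.f_code = code
        self.f_globals = {}
        self.f_locals = {}


class FakeTraceback:
    def __init__(self, entries):
        (code, lineno), rest = entries[0], entries[1:]
        self.tb_frame = FakeFrame(code)
        self.tb_lineno = lineno
        self.tb_next = FakeTraceback(rest) if rest else None


def traceback_from_faulthandler(text):
    entries = []
    for line in text.splitlines():
        match = FAULTHANDLER_LINE.search(line)
        if match:
            entries.append((FakeCode(match.group("filename"), match.group("name")),
                            int(match.group("lineno"))))
    # faulthandler prints the most recent call first; a traceback object is chained
    # from the outermost call inwards, so reverse the order
    entries.reverse()
    return FakeTraceback(entries) if entries else None
The resulting object can then be handed to the Sentry client as the traceback element of an exc_info tuple; exactly how depends on the client version, which is why the names above are only a sketch.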
I am running GridSearchCV in parallel with n_jobs > 1, but I randomly hit the following crash in joblib:
TypeError: Cannot create a consistent method resolution
order (MRO) for bases JoblibException, Exception
Here is the complete stack trace:
Traceback (most recent call last):
File "example_sklearn.py", line 92, in <module>
main()
File "example_sklearn.py", line 76, in main
).fit(X_train, y_train)
File "/usr/local/lib/python2.7/dist-packages/sklearn/grid_search.py",
line 372, in fit for clf_params in grid for train, test in cv)
File "/usr/local/lib/python2.7/dist-packages/sklearn/externals/joblib/parallel.py",
line 516, in __call__self.retrieve()
File "/usr/local/lib/python2.7/dist-packages/sklearn/externals/joblib/parallel.py",
line 448, in retrieve exception_type = _mk_exception(exception.etype)[0]
File "/usr/local/lib/python2.7/dist-packages/sklearn/externals/joblib/my_exceptions.py",
line 61, in _mk_exception__str__=JoblibException.__str__),
TypeError: Cannot create a consistent method resolution
order (MRO) for bases JoblibException, Exception
Any pointers on what this really is, and how I can debug it? Is this a known issue with sklearn?
I had the exact same exception, also while using GridSearchCV.
If you look at the exception, it is complaining that it cannot work out how to order the two parent classes JoblibException and Exception. This is a bug in the joblib package: the dynamically created exception class ends up with an inconsistent set of base classes.
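As a generic illustration of what an inconsistent MRO means (this is not joblib's actual class hierarchy, just the simplest way to trigger the same TypeError):
class A(object):
    pass


class B(A):
    pass


# a base class listed before its own subclass cannot be linearized, so this raises:
# TypeError: Cannot create a consistent method resolution order (MRO) for bases A, B
class C(A, B):
    pass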
But other than that, there is a second problem, which is the source of the exception itself: something raises an exception inside retrieve(), and it is while wrapping and re-raising that exception that you hit this error.
That second problem (the source of the exception) seems to be fixed in later versions of joblib, but scikit-learn is still bundling an old version (I will submit a pull request with the changed file soon).
A temporary workaround would be to install your own version of joblib using
easy_install joblib
and then go to the sklearn/externals folder, remove/rename the joblib folder, and create a symbolic link to your own joblib using:
ln -s /path/to/joblib joblib
EDIT: It seems somebody had already fixed the problem; my version was also just old.
I have searched this site top to bottom yet have not found a single way to accomplish what I want in Python 3.x. This is a simple toy app, so I figured I could write some simple test cases as asserts and call it a day. It does generate reports and such, so I would like to make sure my code doesn't do anything wonky when it changes.
My current directory structure is: (only relevant parts included)
project
- model
    __init__.py
    my_file.py
- test
    my_file_test.py
I am having a hell of a time getting my_file_test.py to import my_file.py.
Like I've said. I've searched this site top to bottom and no solution has worked. My version of Python is 3.2.3 running on Fedora 17.
Previously tried attempts:
https://stackoverflow.com/questions/5078590/dynamic-imports-relative-imports-in-python-3
Importing modules from parent folder
Can anyone explain python's relative imports?
How to accomplish relative import in python
In virtually every attempt I get an error to the effect of:
ImportError: No module named *
OR
ValueError: Attempted relative import in non-package
What is going on here? I have tried every accepted answer on SO as well as all over the interwebs. I'm not doing anything that fancy here, but as a .NET/Java/Ruby programmer this is proving to be anything but intuitive.
EDIT: If it matters, I tried loading the class that I am trying to import in the REPL and I get the following:
>>> import datafileclass
>>> datafileclass.methods
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
>>> x = datafileclass('sample_data/sample_input.csv')
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'module' object is not callable
If it matters... I know the functionality in the class works, but I can't import it, which for now means I can't test it, and in the future it will certainly cause integration issues. (Names changed to protect the innocent.)
I'm getting within a couple of weeks of the desired functionality for this iteration of the library... any help would be useful. I would have done it in Ruby, but the client wants Python as a learning experience.
Structure your code like this:
project
- model
    __init__.py
    my_file.py
- tests
    __init__.py
    test_my_file.py
Importantly, your tests directory should also be a package (it needs an empty __init__.py file in it).
Then in test_my_file.py use from model import my_file, and from the top-level directory run python -m tests.test_my_file. This invokes test_my_file as a module, which makes Python set up its import path to include your top-level directory.
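For illustration, a minimal tests/test_my_file.py might look like this (the question doesn't show what my_file.py contains, so the test body is just an import smoke test to be replaced with real assertions):
import unittest

# this absolute import works when the tests are run from the project root
# with: python -m tests.test_my_file
from model import my_file


class MyFileTest(unittest.TestCase):
    def test_module_is_importable(self):
        # smoke test only; swap in assertions about my_file's actual functions
        self.assertTrue(hasattr(my_file, "__name__"))


if __name__ == "__main__":
    unittest.main()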
Even better, you can use pytest or nose, and running py.test will pick up the tests automatically.
I realise this doesn't answer your question, but it's going to be a lot easier for you to work with Python standard practices rather than against them. That means structuring your project with tests in their own top-level directory.