AWS Lambda returns Unable to import module 'main': No module named 'main' when modules are there - python-3.x

So I'm trying to set up a function in AWS Lambda to run some Python code I imported from a zip.
I've edited the handler to point to the file and then the function I want to run.
I've tried keeping the file in the directory created when I imported the zip folder, and I've also tried moving it to the main function directory. Neither worked.
Not too sure what is wrong here; the full error returned when I run a test is:
Response:
{
    "errorMessage": "Unable to import module 'main': No module named 'main'",
    "errorType": "Runtime.ImportModuleError",
    "stackTrace": []
}
Edit: really new to Lambda so please excuse any silly mistakes

The problem is that, while you appear to have a module named main, it has not been deployed to the Lambda service yet. When you click Test, Lambda runs the deployed code. Perhaps your module was renamed to main some time after your initial deployment?
Local changes to code need to be saved and then deployed. The deploy step is important because, until you deploy the code, the Lambda service will continue to run the previous code.
This has historically been a common problem in the Lambda console, but enhancements have been made to make it more obvious that a deployment is needed. For example, the console now shows "Changes not deployed" after you make a change, until you hit the Deploy button.
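For reference, a minimal sketch of what Lambda expects once the code is deployed. The error names the module main, but the handler function name below is an assumption; match both to your own configuration:

# main.py, placed at the root of the deployment package.
# With the function's handler setting "main.handler", Lambda imports
# the module main and calls handler(event, context) on each invocation.
def handler(event, context):
    return {"statusCode": 200, "body": "ok"}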

I found this question while facing the problem myself. The issue was that the zip put main.py in a subfolder instead of at the root of the archive, so Lambda couldn't import it.
Hope this is helpful for any others!
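A quick way to check for this, assuming your archive is named function.zip (adjust the name to match yours):

import zipfile

# List the archive contents; main.py should appear at the root,
# not as "some-folder/main.py".
with zipfile.ZipFile("function.zip") as z:
    for name in z.namelist():
        print(name)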

Related

Problems Deploying Node App With Render.com

I would like to publish a simple Stripe integration on Render. I apologize in advance for my ignorance on the topic; I'm more front-end oriented and this is the first time I've attempted something like this. I would like to ask
whether the repository https://github.com/Luca-Liseros-Ferrari/stripe-example.git is ready to be published, whether there is some error that could cause the deploy to fail, or whether some preliminary step is required (in the server folder I also have an .env file with the Stripe keys, and I specified STATIC_DIR = "../client/").
On render.com, after clicking "New" > "Web Service" and connecting the GitHub repository, and considering that from the terminal I start server.js with the following commands:
cd server
node server.js
how should I fill in the "Root Directory", "Build Command" and "Start Command" fields? It's still not clear to me. Is the root directory the folder that contains server.js? In my case would it be, for example, "folderName/server" or simply "server"?
I tried to deploy the repository to Render but I get the following error message:
Failed - Exited with status 1 while running your code.
It also tells me "Cannot find module 'express'".
I then reinstalled Express in the server folder with npm install express and verified it was already installed. I therefore believe there is a path error in the step where I create the web service.
(screenshot of the error omitted)
I hope I have provided enough data, and I thank in advance anyone willing to give me a hand.
I solved the problem. In Render, under Advanced, I had to specify the key-value pairs from my .env file as environment variables.
I noticed it thanks to Cyclic, which, after loading the repository, warned me that if the app doesn't work it could be because of that.
I hope it helps someone!
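For anyone with the same layout, the settings that match starting the server with cd server and node server.js would look roughly like this (a sketch: the environment variable names besides STATIC_DIR are placeholders for your actual Stripe keys):

Root Directory: server
Build Command: npm install
Start Command: node server.js

Environment variables (Advanced section):
STATIC_DIR=../client/
STRIPE_SECRET_KEY=sk_test_...        (placeholder)
STRIPE_PUBLISHABLE_KEY=pk_test_...   (placeholder)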

Athenahealth Sandbox - No module named 'athenahealthapi'

I'm trying to play with the sandbox on athenahealth - https://docs.athenahealth.com/api/guides/explore-and-prototype - After registering and creating an application, I begin using the sandbox ("Try sandbox"). I scroll to the sample code and go to GH >> samplecode >> python3 >> testing.py. When I pull and run this code I consistently get the error
ModuleNotFoundError: No module named 'athenahealthapi'
I am unable to install athenahealthapi.
If you are looking at the API code samples you should see one that has a connection class called APIConnection. I believe you need to save that code as a file named athenahealthapi.py in the same folder where you are running your script. That file essentially is the athenahealthapi module, and the sample pulls in the class as athenahealthapi.APIConnection(). I got past that part but am now having trouble with a missing access_token key error. :(
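In other words, once the sample's connection class is saved as athenahealthapi.py next to testing.py, the import resolves against that file (a sketch; the constructor arguments below are hypothetical, so pass whatever the sample's APIConnection actually takes):

# testing.py sits in the same folder as athenahealthapi.py, so Python
# finds the module on sys.path without any pip install step.
import athenahealthapi

api = athenahealthapi.APIConnection(key, secret, version)  # hypothetical arguments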

attempted relative import with no known parent package on Google AppEngine with Python3.7

Getting the following error:
File "/srv/server.py", line 12, in <module>
    from .routes.solver import route as solve
ImportError: attempted relative import with no known parent package
Deploying the app to AppEngine Standard env, and my project looks like so:
---/
|_app.yaml
|_server.py
|_routes
  |_solver.py
In server.py I do from .routes.solver import route as solve and get the above error on GCP, but not locally.
I tried https://stackoverflow.com/a/16985066/483616 and a few others. I tried adding __init__.py at pretty much every level and location, then saw that it isn't needed for Python 3, so I removed them. Pretty much unsure what to do now.
Not optimistic that this is the answer, but just to throw it into the pot: have you seen Problem with Python relative paths when deploying to Google App Engine Flexible?
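One more thing worth trying (a sketch, assuming the layout above): App Engine loads server.py as a top-level module rather than as part of a package, so a relative import has no parent to resolve against. Since the project root is on sys.path, an absolute import avoids the problem entirely:

# server.py -- absolute import; works when server.py runs as a
# top-level module, where the relative form (.routes.solver) cannot.
from routes.solver import route as solve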

SAM Lambda Layer module not found for shared nodejs code

I'm defining multiple Lambda functions in a single template.yaml. These functions share some common code that isn't published as a module. I assumed I could turn this common stuff into a versioned layer, with a directory structure to the following effect:
Project
  LambdaFunc1
    package.json
    node_modules
    func1.js
  LambdaFunc2
    package.json
    node_modules
    func2.js
  common-stuff
    package.json
    my-common.js
  template.yaml
  node_modules
For testing, I copy common-stuff into the Project/node_modules directory, and the other Lambda functions resolve require('common-stuff') because Node walks up the directory structure looking for modules it can't find locally.
To have SAM do the build/package/deploy, I noticed SAM doesn't touch common-stuff; it creates an .aws-sam/build structure with only the two Lambda functions. I had to create a structure for SAM's CodeUri to zip up:
Package/common-stuff/packaged/nodejs/node_modules/common-stuff/, containing my package.json and my-common.js.
My package.json uses name: "common-stuff" and main: "my-common.js".
There are no other files - nothing else under nodejs, as I'm only packaging the modules. This appears to me to be the point of Layers. I have verified, by downloading the Layer zip file, that SAM packages a zip containing nodejs/node_modules/common-stuff/...
In the Lambda function's template definition, I add the permission to allow lambda:GetLayerVersion. When I view the Lambda function in the console, I see this permission along with the others.
Interestingly, aws lambda get-layer-version-policy --layer-name MyLayer --version-number 8 --output text
returns an error saying there are no policies attached. My guess is that this is because I've added the permission directly to the function, as I can see it on the Lambda function with the correct Allow/GetLayerVersion.
This would seem to satisfy what I've read; however, Node doesn't find the module. The CloudWatch logs just say it can't find the module - nothing about permissions or syntax. Also, these functions worked until I added the Layer approach.
sam local start-api doesn't work either, with the same error. When I look in the Windows 10 default layer cache directory C:\Users\me\AppData\Roaming\AWS SAM\, there is an empty layers-pkg directory.
Is there some other magic I'm missing? Is there a better approach for sharing common code across Node Lambda functions?
I can't tell if AWS can't fetch the Layer, or if the zip structure is wrong, or if the require('common-stuff') resolution works differently (I hope not).
Scott
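For reference, a minimal sketch of how a shared layer is typically wired up in template.yaml (the logical names, ContentUri path, and runtime version here are assumptions based on the layout above). At runtime the layer contents are extracted under /opt, and Node resolves /opt/nodejs/node_modules automatically, so require('common-stuff') should work without copying anything:

Resources:
  CommonStuffLayer:
    Type: AWS::Serverless::LayerVersion
    Properties:
      ContentUri: common-stuff/packaged/   # must contain nodejs/node_modules/common-stuff/
      CompatibleRuntimes:
        - nodejs12.x
  LambdaFunc1:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: LambdaFunc1/
      Handler: func1.handler
      Runtime: nodejs12.x
      Layers:
        - !Ref CommonStuffLayer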

Lambda function failing, no logs generated

I'm playing with this PDF To Image converter: I've cloned the repo, run npm install, and changed this section:
var s3EventHandler = new S3EventHandler({
    region: 'my-region',
    outputBucketName: 'my-bucket-name',
    s3: s3,  // s3 must be defined before this, e.g. var s3 = new AWS.S3();
    resolution: 72
});
Renamed it exports.js, zipped up the js, the node_modules folder, package.json and event.json (I've also tried with both of those JSON files removed) and uploaded it into my Lambda function. The S3 trigger has been created and so far is working fine.
I've had multiple test failures because it couldn't find either the async module or the tmp module; moving them to the top level seems to fix it (however, it doesn't complain about the other required modules that aren't at the top level).
In the test it complains that s3 is not defined, which I'm somewhat lost on, as there isn't a lot of detail with it. I thought it could be because I'm just running a test, so the S3 trigger itself is missing.
When I upload a PDF into the bucket, Lambda reports that it runs but fails. Going into CloudWatch Logs shows there is no log stream for it. I've checked the IAM role and it has permissions for CreateLogStream and PutLogEvents (it was the templated IAM policy).
How can I get my logs working to find the problem? Or what can I do to fix the "s3 is not defined" issue, which is my only clue at the moment? It could be related to the top-level module requirement, but that doesn't seem consistent, as only some modules need to be at the top level.
It looks like the CreateLogGroup permission is missing from what you have mentioned. The following permissions are required for Lambda to write logs to CloudWatch:
"logs:CreateLogGroup", "logs:CreateLogStream", "logs:PutLogEvents"
