Unable to import module 'lambda_function': No module named 'sendgrid' - python-3.x

I am using AWS Lambda (Python 3.7) with the SendGrid API to send an email, but I am getting the error:
"Unable to import module 'lambda_function': No module named 'sendgrid'"
Is there a way to resolve this? I can see in some similar issues that the module should be imported from somewhere, but I can't work out from where.
My lambda code is just the sample code from the SendGrid website, with the values updated to the ones I want to use:
import json
import sendgrid
import os
from sendgrid.helpers.mail import *

def lambda_handler(event, context):
    sg = sendgrid.SendGridAPIClient(apikey=os.environ.get('SENDGRID_API_KEY'))
    from_email = Email("test#example.com")
    to_email = Email("****")
    subject = "Sending with SendGrid is Fun"
    content = Content("text/plain", "and easy to do anywhere, even with Python")
    mail = Mail(from_email, subject, to_email, content)
    response = sg.client.mail.send.post(request_body=mail.get())
    print(response.status_code)
    print(response.body)
    print(response.headers)
Thanks

The Lambda environment does not have the sendgrid module available for your code to call. In order to use dependencies that aren't part of the AWS SDK or the language runtime (like the sendgrid library), you have to pre-build the code with the packages locally and upload a zip file. An example can be found here: aws python lambda. There is another Stack Overflow question addressing the same topic here; the second one mentions some tools that make uploading easier.
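A minimal sketch of that packaging step, assuming your handler lives in lambda_function.py in the current directory and pip is available (adjust names and paths to your project):

# Build a Lambda deployment package locally: install the dependency into a
# staging directory, drop the handler next to it, and zip everything up.
import os
import shutil
import subprocess
import sys

build_dir = "build"
os.makedirs(build_dir, exist_ok=True)

# Install sendgrid (and any other third-party dependencies) into the staging directory
subprocess.check_call([sys.executable, "-m", "pip", "install", "sendgrid", "--target", build_dir])

# Copy the handler next to its dependencies and create lambda_deployment.zip
shutil.copy("lambda_function.py", build_dir)
shutil.make_archive("lambda_deployment", "zip", build_dir)
# Upload lambda_deployment.zip through the Lambda console or the AWS CLI,
# keeping the handler set to lambda_function.lambda_handler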

Related

Import Twilio's Authy library in NestJS

We usually use the statement below to load the Authy library in a Node.js file using JS, mostly via a require statement:
const authy = require('authy')('API KEY');
I've moved my code to the Nest ecosystem, and now how should I do the same using TypeScript, since I also want to pass the API key to it?
I've tried the code below as well, but it's still not working:
import { authy } from 'authy'(API KEY)
Please suggest something!
I faced a similar issue in my NestJS project when using the Twilio library.
Currently, I have resolved this by importing it this way:
import authy = require('authy');
If this doesn't work for you (for any reason, e.g. a TypeScript compile error), then try the following import statement:
import * as Authy from 'authy';
Also, let me know which one works for you.

Trying to get some trace information from a Python Google Cloud Function into Cloud Trace

I have a couple of Cloud Functions that make remote calls out to a 3rd party API and I would like to be able to gather latency metrics in Cloud Trace for those calls.
I'm trying to find a bare-bones working piece of example code that I can build on. The only one I found is at https://medium.com/faun/tracing-python-cloud-functions-a17545586359
It is essentially:
import requests
from opencensus.trace import tracer as tracer_module
from opencensus.trace.exporters import stackdriver_exporter
from opencensus.trace.exporters.transports.background_thread \
    import BackgroundThreadTransport

PROJECT_ID = 'test-trace'

# instantiate trace exporter
exporter = stackdriver_exporter.StackdriverExporter(project_id=PROJECT_ID, transport=BackgroundThreadTransport)

def main_fun(data, context):
    tracer = tracer_module.Tracer(exporter=exporter)
    with tracer.span(name='get_token'):
        print('Getting Token')
        authtoken = get_token(email, password)
        print('Got Token')

def get_token(email, password):
    # Make some request out to an API and get a Token
    return accesstoken
There are no errors and everything works as intended, except that no trace shows up in Cloud Trace or Stackdriver.
Am I doing something wrong here? Does anyone have some simple code that works for Cloud Trace within a Cloud Function?

How can we retrieve a Firebase Realtime Database into Python 3 in a Jupyter Notebook (Anaconda)?

Whenever I try to retrieve data from Firebase in my .ipynb, I get an error that there is no such module as firebase_admin. When I installed that module into my conda environment, it said it is only available for Python 2, but I want it for Python 3. Is there any way? Please answer:
from firebase import firebase  # firebase data retrieving code # live

firebase = firebase.FirebaseApplication('LINK-TO-DATABASE', None)
result = firebase.get('/user', None)
result  # type(result) = dict, JSON format

How to invoke an AWS Lambda function from EC2 instances with python asyncio

I recently posted a question about How to allow invoking an AWS Lambda function only from EC2 instances inside a VPC.
I managed to get it working by attaching an IAM role with an "AWS lambda role" policy to the EC2 instances and now I can invoke the lambda function using boto3.
Now, I would like to make the call to the lambda function asynchronously using the asyncio await syntax. I read that Lambda offers asynchronous invocation by setting InvocationType='Event', but that actually makes the call return immediately without getting the result of the function.
Since the function takes some time and I would like to launch many calls in parallel, I would like to avoid blocking the execution while waiting for the function to return.
I tried using aiobotocore, but it only supports basic 's3' service functionality.
The best way to solve this (in my humble opinion) would be to use the AWS API Gateway service to invoke the lambda function through a GET/POST request that can easily be handled using aiohttp.
Nevertheless, I can't manage to make it work.
I added the "AmazonAPIGatewayInvokeFullAccess" policy to the EC2 IAM role, but every time I try:
import requests
r = requests.get('https://url_to_api_gateway_for_function')
I get a forbidden response <Response [403]>.
I created the API Gateway directly from the trigger in the lambda function.
I also tried to edit the API Gateway settings by adding a POST method to the function path, setting "AWS_IAM" authentication, and then deploying it as a "prod" deployment... no luck, still the same forbidden response. When I test it through the test screen on the API Gateway, it works fine.
Any idea how to fix this? Am I missing some step?
I managed to solve my issue after some struggling.
The problem is that curl and Python modules like requests do not sign the HTTP requests with the IAM credentials of the EC2 machine on which they are running. The HTTP request to the AWS API Gateway must be signed using the AWS Signature Version 4 signing protocol.
An example is here:
http://docs.aws.amazon.com/general/latest/gr/sigv4-signed-request-examples.html
Luckily, to keep things simple, there are some helper modules like requests-aws-sign:
https://github.com/jmenga/requests-aws-sign
In the end, the code could look something like this:
import aiohttp
import asyncio
from requests_aws_sign import AWSV4Sign
from boto3 import session

session = session.Session()
credentials = session.get_credentials()
region = session.region_name or 'ap-southeast-2'
service = 'execute-api'

url = "get_it_from_api->stages->your_deployment->invoke_url"
auth = AWSV4Sign(credentials, region, service)

async def invoke_func(loop):
    async with aiohttp.request('GET', url, auth=auth, loop=loop) as resp:
        html = await resp.text()
        print(html)

loop = asyncio.get_event_loop()
loop.run_until_complete(invoke_func(loop))
Hope this will save somebody else some time!
EDIT:
For the sake of completeness and to help others, I have to say that the code above does not work, because requests_aws_sign is not compatible with aiohttp. I was getting an "auth field error".
I managed to solve it by using:
async with session.get(url, headers=update_headers()) as resp:
where update_headers() is a simple function that mimics what requests_aws_sign was doing to the headers (so that I can set them directly on the request above using the headers parameter).
It looks like this:
from urllib.parse import urlparse, urlencode, parse_qs
from botocore.awsrequest import AWSRequest
from botocore.auth import SigV4Auth

def update_headers():
    # credentials, service and region are the ones from the boto3 session above
    url = urlparse("get_it_from_api->stages->your_deployment->invoke_url")
    path = url.path or '/'
    querystring = ''
    if url.query:
        querystring = '?' + urlencode(parse_qs(url.query), doseq=True)
    safe_url = url.scheme + '://' + url.netloc.split(':')[0] + path + querystring
    request = AWSRequest(method='GET', url=safe_url)
    SigV4Auth(credentials, service, region).add_auth(request)
    return dict(request.headers.items())
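For reference, here is a minimal sketch (my own illustration, reusing url and update_headers() from above) of how the signed headers can be plugged into an aiohttp client session; I use a separate name for the aiohttp session so it does not clash with the boto3 session defined earlier:

async def invoke_func_signed():
    # dedicated aiohttp session; 'http' avoids clashing with the boto3 session above
    async with aiohttp.ClientSession() as http:
        async with http.get(url, headers=update_headers()) as resp:
            print(await resp.text())

asyncio.get_event_loop().run_until_complete(invoke_func_signed())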

Migrating from boto2 to boto3

I have this code that uses boto2 that I need to port to boto3, and frankly I got a little lost in the boto3 docs:
connection = boto.connect_s3(host=hostname,
                             aws_access_key_id=access_key,
                             aws_secret_access_key=secret_key,
                             is_secure=False,
                             calling_format=boto.s3.connection.OrdinaryCallingFormat())
s3_bucket = connection.get_bucket(bucket_name)
I also need to make this work with other object stores that aren't AWS S3.
import boto3

s3 = boto3.client('s3', aws_access_key_id=access_key,
                  aws_secret_access_key=secret_key,
                  endpoint_url=hostname, use_ssl=False)
# boto3 clients have no get_bucket(); head_bucket() checks that the bucket exists and is accessible
response = s3.head_bucket(Bucket=bucket_name)
client docs
s3 docs
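If you want something closer to boto2's get_bucket(), the boto3 resource API gives you a bucket object you can work with directly; a rough equivalent, under the same credential and endpoint assumptions as above, would be:

import boto3

s3 = boto3.resource('s3', aws_access_key_id=access_key,
                    aws_secret_access_key=secret_key,
                    endpoint_url=hostname, use_ssl=False)
s3_bucket = s3.Bucket(bucket_name)

# e.g. iterate over the keys, much like boto2's bucket.list()
for obj in s3_bucket.objects.all():
    print(obj.key)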
boto3 and boto are incompatible. Most of the naming is NOT backward compatible.
You MUST read the boto3 documentation to recreate the script. The good news is that the boto3 documentation is better than boto's, though not superb (many tricky parameter examples are not provided).
If you have apps using the old functions, you should create wrapper code for them to make the switch transparent.
That way, you open every object store connection through the wrapper and then instantiate the various buckets using different connectors. Here is the idea (a sketch of the wrapper itself follows the example below).
# AWS
# object_wrapper is your bucket wrapper that all the applications will call
from object_wrapper import object_bucket
from boto3lib.s3 import s3_connector
connector = s3_connector()
bucket = object_bucket(BucketName="xyz", Connector=connector)

# say you use boto2 to connect to the Google object store
from object_wrapper import object_bucket
from boto2lib.s3 import s3_connector
connector = s3_connector()
bucket = object_bucket(BucketName="xyz", Connector=connector)

# say for Azure
from object_wrapper import object_bucket
from azure.storage.blob import BlockBlobService
connector = BlockBlobService(......)
bucket = object_bucket(BucketName="xyz", Connector=connector)
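The object_wrapper module is not shown in the answer above; a minimal sketch of what such a wrapper class might look like (the class and method names here are my own illustration, not an existing library) could be:

class object_bucket:
    """Thin facade so application code never touches a specific storage SDK directly."""

    def __init__(self, BucketName, Connector):
        self._name = BucketName
        self._connector = Connector

    def list_keys(self):
        # Delegate to the connector; each connector class adapts its own SDK
        # (boto3, boto2, azure-storage, ...) to this one interface.
        return self._connector.list_keys(self._name)

# Usage stays the same regardless of the backend, e.g.:
# bucket = object_bucket(BucketName="xyz", Connector=s3_connector())
# print(bucket.list_keys())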
