AWS Lambda@Edge + CloudFront 502 LambdaValidationError - Node.js

I'm using Lambda@Edge + CloudFront to do some image resizing etc. My origin is an S3 bucket.
ISSUE: When I request an object in S3 via CloudFront from the browser, I get a 502 error (see the CloudFront access log below). It even happens when I use just a test function (below).
How I call/query it: my S3 bucket is set as the origin, so I just use my CloudFront domain name d5hbjkkm17mxgop.cloudfront.net and append the S3 path /my_folder/myimage.jpg.
Browser URL used: d5hbjkkm17mxgop.cloudfront.net/my_folder/myimage.jpg
exports.handler = (event, context, callback) => {
    var request = event.Records[0].cf.request;
    console.log(event);
    console.log("\n\n\n");
    console.log(request);
    callback(null, request);
};
I'm pretty sure that request is an object, so I have no idea why this is happening.
When testing in the AWS console everything works, so it has to be a CloudFront/Lambda interface error, because the Lambda is not even invoked (no new log entries are generated).
I also see this error in the CloudFront access log:
2018-01-08 12:40:20 CDG50 855 62.65.189.38 GET d3h4fd56s4fs65d4f6somxgyh.cloudfront.net /nv1_andrej_fake_space/98f741e0b87877c607a6ad0d2b8af7f3ba2f949d7788b07a9e89453043369196 502 - Mozilla/5.0%2520(X11;%2520Ubuntu;%2520Linux%2520x86_64;%2520rv:57.0)%2520Gecko/20100101%2520Firefox/57.0 - - LambdaValidationError usnOquwt7A0R7JkFD3H6biZp21dqnWwC5szU6tHxKxcHv5ZAU_g6cg== d3hb8km1omxgyh.cloudfront.net https 260 0.346 - TLSv1.2 ECDHE-RSA-AES128-GCM-SHA256 LambdaValidationError HTTP/2.0
Any ideas?
EDITED: semicolon

Do not forget to publish a new version of your Lambda. It is not sufficient to save it: the last published version is the one actually deployed, even though you may see different code in the AWS console window.
EDIT: another gotcha: do not forget to change your function version in the CloudFront settings. Select the CloudFront distribution that is bound to your Lambda, go to Behaviors, and choose Edit Behaviors. Scroll down; the last entry is Lambda Function Associations.
The last number in the Lambda Function ARN is the version number of the deployed Lambda.
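For example, a versioned Lambda@Edge association ARN looks like this (the account ID and function name here are placeholders), where the trailing :3 is the published version CloudFront will invoke; CloudFront requires a numbered version, not $LATEST:
arn:aws:lambda:us-east-1:123456789012:function:my-edge-function:3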

Related

AWS Lambda: CloudWatchEvents.disableRule() doesn't work within Lambda but works locally

I'm trying to disable the Cloudwatch Event Rule that is triggering a Lambda, within the Lambda itself, so that it doesn't keep running when it's not necessary to do so.
I have a separate Lambda that calls enableRule to enable the rule, which seems to work fine. The rule is not associated with the function that is making the enableRule call. EDIT: turns out enableRule doesn't work in Lambda either.
However, this Lambda that's supposed to disable it isn't working.
Both functions already have Cloudwatch and CloudwatchEvent Full Access rights in their roles.
var AWS = require('aws-sdk');
var cloudwatchEvents = new AWS.CloudWatchEvents();
var params = {
    Name: cloudwatchEventRuleName
};
console.log("this message will show up");
cloudwatchEvents.disableRule(params, function (err, data) {
    console.log("but this message never appears when it runs via Lambda for some reason!");
    if (err)
        console.log(err, err.stack);
    else
        console.log(data);
});
console.log("and this message will also show up");
The middle console.log never fires when I run it through Lambda. It works perfectly locally, however.
I even printed cloudwatchEventRuleName to check if I had any typos, but the rule name seems right. It's like the function is just outright skipping the disableRule call altogether for whatever reason.
So apparently, years later, setting up VPCs still haunts me.
Yep, it's a VPC configuration thing.
I could swear that the Subnet that the Lambda function was using had a route table that pointed to a properly set up Network Interface with a NAT Gateway Instance.
Out of curiosity, I tried making the route table entry of 0.0.0.0/0 point to the instance (i-#####) rather than the network interface (eni-######).
After I pressed Save in the Route Table, it automatically transformed into eni-######, similar to what I already had it set up...
Except this time the function actually started working now.
I have no idea what kind of magic AWS does so that associating an instance is not the same as associating a network interface, even though the former transformed into the same ID anyway, but whatever.
So for anyone encountering this same problem: always remember to double-check if your function actually has access to the internet to use the AWS APIs.
EDIT: One more thing: I had to make sure that enableRule and disableRule were both awaited, because the AWS request may in fact never be sent if the handler returns before the request has completed. So we turned the call into a promise just so we can await it:
try { await cloudwatchEvents.disableRule(params).promise().then((result) => console.log(result)) }
catch (error) { console.log("Error when disabling rule!", error); }
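For context, a minimal sketch of how the whole handler might look with the awaited call (the async handler shape, rule name, and log messages below are assumptions, not the original code):
const AWS = require('aws-sdk');
const cloudwatchEvents = new AWS.CloudWatchEvents();

// Awaiting .promise() keeps the handler alive until DisableRule has actually completed.
exports.handler = async (event) => {
    const params = { Name: 'my-cloudwatch-event-rule' }; // placeholder rule name
    try {
        const result = await cloudwatchEvents.disableRule(params).promise();
        console.log('Rule disabled:', result);
    } catch (error) {
        console.log('Error when disabling rule!', error);
    }
};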

Why am I unable to set Amazon S3 as a trigger for my Serverless Lambda Function?

I am attempting to set up a NodeJS Lambda function to be triggered when an image is uploaded to an Amazon S3 bucket. I have seen multiple tutorials and have the YML file set up as shown. Below is the YML config file:
functions:
  image-read:
    handler: handler.imageRead
    events:
      - s3:
          bucket: <bucket-name-here>
          event: s3:ObjectCreated:*
Is there something I am missing for the configuration? Is there something I need to do in an IAM role to set this up properly?
The YAML that you have here looks good but there may be some other problems.
Just to get you started:
are you deploying the function using the right credentials? (I've often seen people deploy to a different account than they think; verify in the web console that it's there)
can you invoke the function in some other way? (from the serverless command line, using http trigger etc.)
do you see anything in the logs of that function? (add console.log statements to see if anything is being run)
do you see the trigger installed in the web console?
can you add trigger manually on the web console?
Try adding a simple function that only prints some logs when it runs (see the sketch below) and add a trigger for it manually. If that works, try to do the same with the Serverless command line, again starting with a simple function with just one log statement, and if that works, go from there.
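A minimal sketch of such a log-only handler, assuming the handler.imageRead name from the serverless.yml in the question:
'use strict';

// handler.js - just logs the incoming S3 event so you can confirm the trigger fires
module.exports.imageRead = async (event) => {
    console.log('Received S3 event:', JSON.stringify(event, null, 2));
    return { statusCode: 200 };
};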
See also this post for more hints - S3 trigger is not registered after deployment:
https://forum.serverless.com/t/s3-trigger-is-not-registered-after-deployment/1858

How to invoke an AWS Lambda function from EC2 instances with python asyncio

I recently posted a question about How to allow invoking an AWS Lambda function only from EC2 instances inside a VPC.
I managed to get it working by attaching an IAM role with an "AWS lambda role" policy to the EC2 instances and now I can invoke the lambda function using boto3.
Now, I would like to make the call to the Lambda function asynchronously using the asyncio await syntax. I read that the invoke call offers an asynchronous mode by setting InvocationType='Event', but that makes the call return immediately without the result of the function.
Since the function takes some time and I would like to launch many in parallel I would like to avoid blocking the execution while waiting for the function to return.
I tried using aiobotocore, but it only supports basic 's3' service functionality.
The best way to solve this (in my humble opinion) would be to use the AWS API Gateway service to invoke the Lambda function through a GET/POST request that can be easily handled using aiohttp.
Nevertheless, I can't manage to make it work.
I added the "AmazonAPIGatewayInvokeFullAccess" policy to the EC2 IAM role, but every time I try:
import requests
r = requests.get('https://url_to_api_gateway_for_function')
I get a forbidden response <Response [403]>.
I created the API Gateway directly from the trigger in the Lambda function.
I also tried to edit the API Gateway settings by adding a POST method to the function path and setting "AWS_IAM" authorization, then deploying it as a "prod" stage... no luck, still the same forbidden response. When I test it through the test screen on the API Gateway, it works fine.
Any idea how to fix this? Am I missing some step?
I managed to solve my issue after some struggling.
The problem is that curl and Python modules like requests do not sign the HTTP requests with the IAM credentials of the EC2 machine they run on. The HTTP request to the AWS API Gateway must be signed using the AWS Signature Version 4 (SigV4) protocol.
An example is here:
http://docs.aws.amazon.com/general/latest/gr/sigv4-signed-request-examples.html
Luckily, to keep things simple, there are some helper modules like requests-aws-sign:
https://github.com/jmenga/requests-aws-sign
In the end, the code could look something like this:
import aiohttp
import asyncio
from requests_aws_sign import AWSV4Sign
from boto3 import session

session = session.Session()
credentials = session.get_credentials()
region = session.region_name or 'ap-southeast-2'
service = 'execute-api'
url = "get_it_from_api->stages->your_deployment->invoke_url"
auth = AWSV4Sign(credentials, region, service)

async def invoke_func(loop):
    async with aiohttp.request('GET', url, auth=auth, loop=loop) as resp:
        html = await resp.text()
        print(html)

loop = asyncio.get_event_loop()
loop.run_until_complete(invoke_func(loop))
Hope this will save somebody else some time!
EDIT:
For the sake of completeness and to help others, I have to say that the code above does not work, because requests_aws_sign is not compatible with aiohttp; I was getting an "auth field error".
I managed to solve it by using:
async with session.get(url, headers=update_headers()) as resp:
where update_headers() is a simple function that mimics what requests_aws_sign was doing to the headers (so that I can set them directly on the request above via the headers parameter).
It looks like this:
from urllib.parse import urlparse, urlencode, parse_qs
from botocore.awsrequest import AWSRequest
from botocore.auth import SigV4Auth

def update_headers():
    url = urlparse("get_it_from_api->stages->your_deployment->invoke_url")
    path = url.path or '/'
    querystring = ''
    if url.query:
        querystring = '?' + urlencode(parse_qs(url.query), doseq=True)
    safe_url = url.scheme + '://' + url.netloc.split(':')[0] + path + querystring
    request = AWSRequest(method='GET', url=safe_url)
    SigV4Auth(credentials, service, region).add_auth(request)
    return dict(request.headers.items())

PermanentRedirect while generating pre signed url

I am having an issue while creating a pre-signed URL for AWS S3 using aws-sdk in Node.js. It gives me "PermanentRedirect: The bucket you are attempting to access must be addressed using the specified endpoint."
const AWS = require('aws-sdk')

const s3 = new AWS.S3()
AWS.config.update({accessKeyId: 'test123', secretAccessKey: 'test123'})
AWS.config.update({region: 'us-east-1'})

const myBucket = 'test-bucket'
const myKey = 'test.jpg'
const signedUrlExpireSeconds = 60 * 60

const url = s3.getSignedUrl('getObject', {
    Bucket: myBucket,
    Key: myKey,
    Expires: signedUrlExpireSeconds
})
console.log(url)
How can I fix this error so that the pre-signed URL works? Also, I need to know what the purpose of Key is.
1st - what is the region of your bucket? S3 is a global service, yet each bucket has a region that you must select when creating it. If your client is configured for a different region than the bucket, you can get redirect errors like this one (see the sketch after this list).
2nd - when working with S3 outside the N. Virginia (us-east-1) region, there can be situations where AWS's internal SSL/DNS is not yet in sync. I have hit this multiple times and can't find exact docs on it, but the symptoms are redirects, not-found errors, or access errors, and after 4-12 hours it simply starts working. From what I could dig up, it is related to internal AWS SSL/DNS for S3 buckets that are not in the N. Virginia region. So that could be it.
3rd - if you re-created buckets multiple times re-using the same name: bucket names are global, even though each bucket is regional. So this could again be scenario 2, where within the last 24 hours the bucket name actually pointed to a different region and AWS's internal DNS/SSL hasn't synced yet.
p.s. Key is the object's key; every object inside a bucket has a key. In the AWS console you can navigate a "key" that looks like a path to a file, but it is not a path. S3 has no concept of directories like hard drives do; any "path" to a file is just an object's key. The AWS console splits the object's key on / and displays the parts as directories to give a better UX while navigating the UI.
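As a concrete illustration of the 1st point, here is a minimal sketch that passes the bucket's region when constructing the client, so the signed URL is built against the correct regional endpoint (the eu-west-1 region below is an assumption; substitute your bucket's actual region):
const AWS = require('aws-sdk');

// Create the client with the bucket's actual region so the pre-signed URL
// points at the right regional endpoint instead of triggering a redirect.
const s3 = new AWS.S3({
    region: 'eu-west-1',       // assumed region, use your bucket's region
    signatureVersion: 'v4'
});

const url = s3.getSignedUrl('getObject', {
    Bucket: 'test-bucket',     // bucket name from the question
    Key: 'test.jpg',           // the object's key inside the bucket
    Expires: 60 * 60           // link validity in seconds
});
console.log(url);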

Amazon S3 PUT throws "SignatureDoesNotMatch"

This AWS security stuff is driving me nuts. I'm trying to upload some binary files from a node app using knox. I keep getting the infamous SignatureDoesNotMatch error with my key/secret combination. I traced it down to this: with e.g. Transmit, I can access the bucket by connecting to s3.amazonaws.com, but I cannot access it via the virtual subdomain mybucket.s3.amazonaws.com. (When I try to access the bucket with the s3.amazonaws.com/mybucket syntax, I get an error saying that only the subdomain style is allowed.)
I have tried setting the bucket policy to explicitly allow PUT from the respective user, but that had no effect. Can anyone please shed some light on how I can enable uploading of files from one specific AWS user?
After a lot of trial and error, I narrowed it down to a couple of issues. I'm not entirely sure which one ultimately fixed it, but here are a few things you might want to try:
make sure you are setting the right region (datacenter). In my case, it looked like this:
knox.createClient({
    key: this.config.key
  , secret: this.config.secret
  , bucket: this.config.bucket
  , region: 'us-west-2' // cause my bucket is supposed to be in oregon
});
Check your PUT headers. In my case, the Content-Type was accidentally set to undefined, which caused issues:
var headers = {
    'x-amz-acl': 'public-read' // if you want anyone to be able to download the file
};
if (filesize) headers['Content-Length'] = filesize;
if (mime) headers['Content-Type'] = mime;
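For reference, a sketch of how those headers might be used in an actual upload, along the lines of knox's documented put() usage (client is assumed to be the result of knox.createClient above; the file path and destination key are placeholders):
var fs = require('fs');

// Stream a local file to S3 with explicit Content-Length and Content-Type.
var filePath = '/tmp/photo.jpg';                 // placeholder local file
fs.stat(filePath, function (err, stat) {
    if (err) throw err;
    var put = client.put('/uploads/photo.jpg', { // placeholder destination key
        'Content-Length': stat.size,
        'Content-Type': 'image/jpeg',
        'x-amz-acl': 'public-read'
    });
    fs.createReadStream(filePath).pipe(put);
    put.on('response', function (res) {
        console.log('PUT finished with status', res.statusCode);
    });
});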
