I'm trying to simply list all the files in an S3 bucket using Lambda
The code looks as follows:
var AWS = require('aws-sdk');
var s3 = new AWS.S3();

exports.handler = (event, context, callback) => {
    s3.listObjectsV2({
        Bucket: "bucketname",
    }, function(err, data) {
        console.log("DONE : " + err + " : " + data);
        callback(null, 'Hello from Lambda');
    });
};
Using the above, I never get the "DONE" printed at all. The log doesn't show any information except for the fact that it timed out.
Is there any troubleshooting I could do here? I would've thought that at least the error would've been shown in the "DONE" section.
Thanks to Michael above. The problem was that it was running inside a VPC. If I change it to No VPC, it works correctly. Your solution may be different if you require it to run in a VPC.
If you are running your code inside a VPC, make sure to create a VPC endpoint for S3.
Here is the tutorial: https://aws.amazon.com/blogs/aws/new-vpc-endpoint-for-amazon-s3/
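If you prefer to script it, the same gateway endpoint can be created with the AWS SDK for JavaScript. This is only a rough sketch; the VPC ID, route table ID, and region below are placeholders you would replace with your own values:
var AWS = require('aws-sdk');
var ec2 = new AWS.EC2({ region: 'eu-west-1' }); // placeholder region

// Create a gateway endpoint so the Lambda's VPC can reach S3 without internet access.
ec2.createVpcEndpoint({
    VpcId: 'vpc-xxxxxxxx',                     // your VPC
    ServiceName: 'com.amazonaws.eu-west-1.s3', // S3 service name for your region
    VpcEndpointType: 'Gateway',
    RouteTableIds: ['rtb-xxxxxxxx']            // route table(s) used by the Lambda's subnets
}, function(err, data) {
    if (err) console.log(err, err.stack);
    else console.log(data.VpcEndpoint.VpcEndpointId);
});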
If you are running your code inside a VPC, make sure the VPC subnet and its routing table entries are correct (routing: Dest = 0.0.0.0/0 and target = igw-xxxx). The VPC endpoint route must also be added in order to communicate with S3 via the endpoint.
In my case I had selected 2 different subnets, one private and the other public, so it worked sometimes and sometimes not. I changed both subnets to private (with a NAT gateway in the route table) and it then worked without the timeout error.
I am trying to download a file from an EC2 instance and store it temporarily in the tmp folder inside AWS Lambda. This is what I have tried:
const fs = require('fs');
let Client = require('ssh2-sftp-client');
let sftp = new Client();

sftp.connect({
    host: host,                           // EC2 hostname
    username: user,
    privateKey: fs.readFileSync(pemfile)  // path to the .pem key file
}).then(() => {
    return sftp.get('/config/test.txt', fs.createWriteStream('/tmp/test.txt'));
}).then(() => {
    sftp.end();
}).catch(err => {
    console.error(err.message);
});
The function runs without generating an error but nothing is written to the destination file. What am I doing wrong here and how could I debug this? Also is there a better way of doing this altogether?
This is not the cloud way to do it, IMO. Create an S3 bucket, and create a proper Lambda execution role so the Lambda function is able to read from the bucket. Also create a role for the EC2 instance so it is able to write to the same S3 bucket. Using the S3 API from both sides, the Lambda function and the EC2 instance, should be enough to share the file.
Think about this approach: you decouple your solution from a VPC and region perspective. Also, since the Lambda only needs to access S3, you save ENI (elastic network interface) resources, so you are not using your VPC's private IPs. These are just advantages that may not matter in your case, but it is good to be aware of them.
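As a rough illustration of that approach, here is a minimal sketch (not the poster's code) of the Lambda side: the EC2 instance would already have uploaded the file with the S3 API, and the Lambda function pulls it down into /tmp. The bucket name and key are placeholders:
const fs = require('fs');
const AWS = require('aws-sdk');
const s3 = new AWS.S3();

exports.handler = async (event) => {
    // Placeholders: replace with your bucket and the key the EC2 instance uploaded.
    const params = { Bucket: 'my-shared-bucket', Key: 'config/test.txt' };

    // Download the object and write it to Lambda's writable /tmp directory.
    const data = await s3.getObject(params).promise();
    fs.writeFileSync('/tmp/test.txt', data.Body);

    return 'file copied to /tmp/test.txt';
};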
I recently discovered DynamoDB Local and started building it into my project for local development. I decided to go the Docker image route (as opposed to the downloadable .jar file).
That being said, I've got the image up and running, have created a table, and can successfully interact with the Docker container via the AWS CLI. aws dynamodb list-tables --endpoint-url http://localhost:8042 successfully returns the table I created previously.
However, when I run my Lambda function and set my AWS config like so:
const axios = require('axios')
const cheerio = require('cheerio')
const randstring = require('randomstring')
const aws = require('aws-sdk')

const dynamodb = new aws.DynamoDB.DocumentClient()

exports.lambdaHandler = async (event, context) => {
    let isLocal = process.env.AWS_SAM_LOCAL

    if (isLocal) {
        aws.config.update({
            endpoint: new aws.Endpoint("http://localhost:8042")
        })
    }
(which I have confirmed is getting set) it actually writes to the table (with the same name as the local DynamoDB table) in the live AWS service, as opposed to the local container and table.
It's also worth mentioning I'm unable to connect to the local instance of DynamoDB with the AWS NoSQL Workbench tool even though it's configured to point to http://localhost:8042 as well...
Am I missing something? Any help would be greatly appreciated. I can provide any more information if I haven't already done so as well :D
Thanks.
SDK configuration changes, such as region or endpoint, do not retroactively apply to existing clients (regular DynamoDB client or a document client).
So, change the configuration first and then create your client object. Or simply pass the configuration options into the client constructor.
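For example, a minimal sketch of the second option, reusing the AWS_SAM_LOCAL check and local endpoint from the question:
const aws = require('aws-sdk')

// Build the client options first, then construct the client with them.
const options = {}
if (process.env.AWS_SAM_LOCAL) {
    options.endpoint = new aws.Endpoint('http://localhost:8042')
}

// The endpoint is applied to this client at construction time; clients created
// before a later aws.config.update() call will not pick up the change.
const dynamodb = new aws.DynamoDB.DocumentClient(options)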
I am trying to manage direct file upload to S3 according to the Heroku recommendations:
first, you need to generate a presigned URL on your own server,
then use this URL in the client to upload the image directly from the browser to the S3 bucket.
I finally managed to get this working locally,
but when I tried to deploy the server to Heroku it started to fail with no reason or readable error, just a generic error and a strange message when I try to print it.
What looks strange to me is that the presigned URLs are completely different when I make the call from localhost and from Heroku.
The response for localhost looks like this:
https://mybucket.s3.eu-west-1.amazonaws.com/5e3ec346d0b5af34ef9dfadf_avatar.png?AWSAccessKeyId=<AWSKeyIdHere>&Content-Encoding=base64&Content-Type=image%2Fpng&Expires=1581172437&Signature=xDJcRBiA%2FmQF1qKhBZrnhFXWdaM%3D
and the response for the Heroku deployment looks like this:
https://mybucket.s3.u-west-1.amazonaws.com/5e3ee2bd1513b60017d85c6c_avatar.png?Content-Type=image%2Fpng&X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=<credentials-key-here>%2F20200208%2Fu-west-1%2Fs3%2Faws4_request&X-Amz-Date=20200208T163315Z&X-Amz-Expires=900&X-Amz-Signature=<someSignature>&X-Amz-SignedHeaders=content-encoding%3Bhost
The server code is almost the same as in the examples:
const Bucket = process.env.BUCKET_NAME
const region = process.env.BUCKET_REGION

AWS.config = new AWS.Config({
    accessKeyId: process.env.S3_KEY,
    secretAccessKey: process.env.S3_SECRET,
    region,
    logger: console
})

const s3 = new AWS.S3()

async function generatePutUrl(inputParams = {}) {
    const params = { Bucket, ...inputParams }
    const { Key } = inputParams
    const putUrl = await s3.getSignedUrl('putObject', params)
    const getUrl = generateGetUrlLocaly(Key)
    return { putUrl, getUrl }
}
The only difference that I can imagine is SSL: I run the local server via HTTP and Heroku goes over HTTPS by default...
but I don't understand how that could have an influence here.
I would appreciate any meaningful advice on how to debug and fix it.
Thank you.
It looks like your bucket region is incorrect. Shouldn't it be eu-west-1 instead of u-west-1?
Please update your BUCKET_REGION environment variable in the Heroku app settings from
u-west-1
to
eu-west-1
and restart the dynos. It may solve your problem.
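As an optional sanity check (my own suggestion, not part of the fix above), you could log the region the SDK resolved at startup so a typo like u-west-1 shows up immediately in the Heroku logs:
// After the AWS.config setup shown in the question's server code:
console.log('Configured S3 region:', AWS.config.region)
console.log('BUCKET_REGION env var:', process.env.BUCKET_REGION)

// A region normally looks like "eu-west-1"; warn if the value looks malformed.
if (!/^[a-z]{2,3}-[a-z]+-\d$/.test(process.env.BUCKET_REGION || '')) {
    console.warn('BUCKET_REGION looks malformed:', process.env.BUCKET_REGION)
}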
I am trying to fetch data with my web client (an Express server) from my backend service (also an Express server). Locally it works fine, using environment variables to set the backend service URL. But deployed on AWS, the fetch from my web client EC2 instance to my backend EC2 instance fails.
I log my environment variable for the backend service (which comes from the AWS SSM Parameter Store) and it logs the correct service URL for my backend EC2 instance.
But then it fails, because it calls 'GET host-url/service-url/endpoint' instead of 'GET service-url/endpoint'. I don't know if this is an AWS or Node.js/Express problem.
That's how I call my backend:
async function callEndpoint(endpointUrl) {
    console.log("Fetching to: " + endpointUrl)

    const response = await fetch(endpointUrl, {
        method: 'GET',
    });

    let data = await response.json();
    return data;
}
The console.log prints out the correct value, but fetch (I guess, though I don't understand why) treats it as a relative path, prefixing it with the host URL of my frontend EC2 instance's IP/DNS.
I don't know how relevant this is, but my servers are running in Docker containers in an ECS cluster (each container on its own EC2 instance).
If you don't specify the scheme in the URL, fetch assumes the URL is relative and resolves it against the current host's domain root.
fetch("external-service.domain.com/endpoint")
translates into
fetch("https://hostname/external-service.domain.com/endpoint")
Try adding https:// or the appropriate scheme to your URL.
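For example, a small sketch using the callEndpoint function from the question (SERVICE_URL is just a hypothetical name for wherever you store the backend address):
// Make sure the backend URL carries a scheme before handing it to fetch.
const base = process.env.SERVICE_URL   // hypothetical env var, e.g. "10.0.1.23:3000"
const serviceUrl = base.startsWith('http') ? base : 'http://' + base

callEndpoint(serviceUrl + '/endpoint').then(data => console.log(data))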
Read more:
https://url.spec.whatwg.org/#url-writing
I've got a list of load balancer ARNs which need their certificates swapped from an old one which expires soon to a new one. I've written a script that does this successfully for non-ELBv2 LBs, but v2 types are causing me a headache since I can't seem to programmatically match a listener (or listeners) to a LB ARN. I know that once I have that listener ARN, I can use:
elbv2.modifyListener(params,(err, data) => {<snip>});
and expect a response, but getting to that point is eluding me. I've tried elbv2.describeLoadBalancers(), but that appears to require a listener ARN in the params.
So, how do I give AWS an LB ARN and get its associated listener ARN(s)?
You're looking for describeListeners(). The documentation says:
Describes the specified listeners or the listeners for the specified Application Load Balancer or Network Load Balancer. You must specify either a load balancer or one or more listeners.
So if you call it with just LoadBalancerArn, you should get a list of listeners attached to that load balancer.
var AWS = require('aws-sdk');
var elbv2 = new AWS.ELBv2();

var params = {
    LoadBalancerArn: 'arn:...'
};

elbv2.describeListeners(params, function(err, data) {
    if (err) console.log(err, err.stack); // an error occurred
    else console.log(data);               // successful response
});
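From there you can feed each returned listener ARN into modifyListener to swap the certificate. A rough sketch of that second step (the new ACM certificate ARN is a placeholder):
elbv2.describeListeners({ LoadBalancerArn: 'arn:...' }, function(err, data) {
    if (err) return console.log(err, err.stack);

    // Swap the certificate on every listener that currently has one (HTTPS/TLS listeners).
    data.Listeners
        .filter(listener => listener.Certificates && listener.Certificates.length > 0)
        .forEach(listener => {
            elbv2.modifyListener({
                ListenerArn: listener.ListenerArn,
                Certificates: [{ CertificateArn: 'arn:aws:acm:...' }] // placeholder for the new certificate
            }, (err, data) => {
                if (err) console.log(err, err.stack);
                else console.log('Updated listener', listener.ListenerArn);
            });
        });
});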