I have a Node.js/Express app from which I want to connect to AWS S3.
I have a temporary approach to making the connection:
environment file
aws_access_key_id=XXX
aws_secret_access_key=XXXX/
aws_session_token=xxxxxxxxxxxxxxxxxxxxxxxxxx
S3-connection-service.js
const AWS = require("aws-sdk");

AWS.config.update({
  accessKeyId: `${process.env.aws_access_key_id}`,
  secretAccessKey: `${process.env.aws_secret_access_key}`,
  sessionToken: `${process.env.aws_session_token}`,
  region: `${process.env.LOCAL_AWS_REGION}`
});

const S3 = new AWS.S3();

module.exports = {
  listBucketContent: (filePath) =>
    new Promise((resolve, reject) => {
      const params = { Bucket: bucketName, Prefix: filePath };
      S3.listObjects(params, (err, objects) => {
        if (err) {
          reject(err);
        } else {
          resolve(objects);
        }
      });
    }),
  ....
  ....
}
controller.js
const fetchFile = require("../../S3-connection-service.js");
const AWSFolder = await fetchFile.listBucketContent(filePath);
Fine, it works and I'm able to access the S3 bucket files and play with them.
PROBLEM
The problem is that the connection is not persistent. Since I use a session token, the connection stays alive only for a while; after some time new tokens are generated and I have to copy-paste them into the env file and re-run the Node app.
I really have no idea how I can make the connection persistent.
Where should I store the AWS secrets, and how should I use them to connect to S3 so that the connection stays alive?
Just remove
AWS.config.update({
accessKeyId: `${process.env.aws_access_key_id}`,
secretAccessKey: `${process.env.aws_secret_access_key}`,
sessionToken: `${process.env.aws_session_token}`,
region: `${process.env.LOCAL_AWS_REGION}`
});
code block from the Lambda source in S3-connection-service.js.
Attach a role to the Lambda function with the proper permissions. You will have the same functionality.
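With the role attached, a minimal sketch of what S3-connection-service.js could look like after removing that block (inside Lambda, the SDK picks up the execution role's credentials and the region automatically):
const AWS = require("aws-sdk");

// No AWS.config.update(...) here: the Lambda execution role supplies the
// credentials and the region comes from the Lambda environment.
const S3 = new AWS.S3();

module.exports = {
  listBucketContent: (filePath) =>
    new Promise((resolve, reject) => {
      // bucketName is assumed to be defined elsewhere, as in the original service
      const params = { Bucket: bucketName, Prefix: filePath };
      S3.listObjects(params, (err, objects) => {
        if (err) {
          reject(err);
        } else {
          resolve(objects);
        }
      });
    }),
};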
For local development:
You can set environment variables before testing your application.
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
export AWS_DEFAULT_REGION=us-west-2
https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html
If you are using an IDE, you can set these environment variables in it.
If you are testing from the CLI:
$ AWS_ACCESS_KEY_ID=EXAMPLE AWS_SECRET_ACCESS_KEY=EXAMPLEKEY AWS_DEFAULT_REGION=us-west-2 npm start
"connect to S3 so connection remains alive?"
You can't make one request to S3 and keep it alive forever.
These are your options:
Add a try/catch statement inside your code to handle the credentials-expired error, then generate new credentials and re-initialize the S3 client (see the sketch after this list).
Instead of using a Role, use a User (IAM identities). User credentials can be valid indefinitely, so you won't need to update them in this case.
Do not provide the credentials to AWS.config.update like you are doing right now. If you don't provide credentials, the AWS client will try to read them from your ~/.aws/credentials file automatically. If you create a script (e.g., a cron job) that updates that file every hour, your credentials will be up to date at all times.
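For the first option, a rough sketch of catching the expiry and re-initializing the client. It assumes expired temporary credentials surface with an error code like ExpiredToken, and the refreshCredentials() helper is hypothetical (however you obtain new temporary keys):
const AWS = require("aws-sdk");

const bucketName = "my-bucket"; // placeholder; use your bucket name
let S3 = new AWS.S3();

// Hypothetical helper: obtain fresh temporary credentials (e.g. from STS or
// whatever issues your session tokens) -- not part of the original post.
async function refreshCredentials() {
  // return { accessKeyId, secretAccessKey, sessionToken };
}

async function listBucketContent(filePath) {
  const params = { Bucket: bucketName, Prefix: filePath };
  try {
    return await S3.listObjects(params).promise();
  } catch (err) {
    // Assumption: expired session credentials are reported as ExpiredToken
    if (err.code === "ExpiredToken") {
      AWS.config.update(await refreshCredentials());
      S3 = new AWS.S3();                        // re-initialize the client
      return S3.listObjects(params).promise();  // retry once
    }
    throw err;
  }
}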
I have been using AWS SDK v2.846.0 to create an SSL certificate for different domains, but when I make a request I get this error:
"Inaccessible host: acm.undefined.amazonaws.com'. This service may not be available in the us-east-1' region."
Does anyone know what I could do?
I am using this documentation: https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/ACM.html#requestCertificate-property
It looks like the region is undefined in your ACM client configuration.
To supply options (including the access key, secret key, and region), you can either pass them into the service constructor, e.g.:
const options = { secretAccessKey: skid, accessKeyId: akid, region: 'us-east-2' };
const acm = new AWS.ACM(options);
Or you can update the SDK's config object, but you must do this before you create your service object, for example:
const options = { secretAccessKey: skid, accessKeyId: akid, region: 'us-east-2' };
AWS.config.update(options);
const acm = new AWS.ACM();
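For example, a hedged sketch of a requestCertificate call once the region is set (the domain names are placeholders):
const AWS = require('aws-sdk');

// Region must be set, otherwise the SDK builds the host acm.undefined.amazonaws.com
const acm = new AWS.ACM({ region: 'us-east-1' });

const params = {
  DomainName: 'example.com',                      // placeholder
  SubjectAlternativeNames: ['www.example.com'],   // placeholder
  ValidationMethod: 'DNS'
};

acm.requestCertificate(params, (err, data) => {
  if (err) console.error(err, err.stack);
  else console.log(data.CertificateArn);
});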
When I am trying to load AWS credentials in my project, it gives back an error.
When using credentials in plain text everything works fine, but when I try to use environment variables it doesn't work.
Error message:
Missing credentials in config, if using AWS_CONFIG_FILE, set AWS_SDK_LOAD_CONFIG=1
Here is the code I tried:
const AWS = require('aws-sdk');
const SESConfig = {
  apiVersion: "2010-12-01",
  accessKeyId: process.env.AWS_SECRET_KEY,
  accessSecretKey: process.env.AWS_SECRET_KEY,
  region: "us-east-1"
}
AWS.config.update(SESConfig);
var sns = new AWS.SNS();
function sendSMS(to_number, message, cb) {
  sns.publish({
    Message: message,
    Subject: 'Admin',
    PhoneNumber: to_number
  }, cb);
}

// Example
const PhoneNumberArray = ['any mobile number']
PhoneNumberArray.forEach(number => {
  sendSMS(number, "Lorem Ipsum is simply dummy text of the printing and typesetting industry.", (err, result) => {
    console.log("RESULTS: ", err, result)
  })
})
By default, the SDK detects AWS credentials set in your environment and uses them to sign requests to AWS. That way you don’t need to manage credentials in your applications.
Unix:
$ export AWS_ACCESS_KEY_ID="your_key_id"
$ export AWS_SECRET_ACCESS_KEY="your_secret_key"
Windows:
SET AWS_ACCESS_KEY_ID="your_key_id"
SET AWS_SECRET_ACCESS_KEY="your_secret_key"
Powershell:
$Env:AWS_ACCESS_KEY_ID="YOUR_ACCESS_KEY_ID"
$Env:AWS_SECRET_ACCESS_KEY="YOUR_SECRET_ACCESS_KEY"
you can also add $ export AWS_SESSION_TOKEN='your_token' (optional)
See aws-sdk for more details.
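With those variables exported, the service client can be created without passing credentials in code; roughly:
const AWS = require('aws-sdk');

// No explicit credentials: the SDK reads AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY
// (and AWS_SESSION_TOKEN, if set) from the environment automatically.
const sns = new AWS.SNS({ region: 'us-east-1' });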
Otherwise you can create a ~/.aws/credentials file and add:
[default]
aws_access_key_id = <YOUR_ACCESS_KEY_ID>
aws_secret_access_key = <YOUR_SECRET_ACCESS_KEY>
See aws for more details.
I noticed that you are setting your accessKeyId and secretAccessKey to the same environment variable.
const SESConfig = {
apiVersion: "2010-12-01",
accessKeyId: process.env.AWS_SECRET_KEY, // should be: process.env.AWS_ACCESS_ID
secretAccessKey: process.env.AWS_SECRET_KEY,
region: "us-east-1"
}
These are supplied as separate values by AWS and should be represented by two separate environment variables.
Maybe this is your issue?
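If that's the issue, the corrected config would look something like this (AWS_ACCESS_ID is just the variable name suggested in the comment above; use whichever name you actually exported):
const SESConfig = {
  apiVersion: "2010-12-01",
  accessKeyId: process.env.AWS_ACCESS_ID,      // access key ID, from its own variable
  secretAccessKey: process.env.AWS_SECRET_KEY, // secret access key
  region: "us-east-1"
};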
You can try creating an AWS profile with the credentials if you have the AWS CLI installed.
$ aws configure --profile testuser
AWS Access Key ID [None]: 1234
AWS Secret Access Key [None]: 1234
Default region name [None]: us-east-1
Default output format [None]: text
After that you can set the AWS_PROFILE as environment variable.
Linux / Mac
export AWS_PROFILE=testuser
Windows
setx AWS_PROFILE testuser
After that you should be able to run your program, and AWS will get the credentials from your profile. This way you don't have to save your credentials in a .env file. If you do, remember to add it to .gitignore.
https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html
https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-profiles.html
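If you prefer to pick the profile in code rather than through the AWS_PROFILE environment variable, the SDK can also load it from the shared credentials file; a small sketch using the testuser profile created above:
const AWS = require('aws-sdk');

// Load the 'testuser' profile from ~/.aws/credentials
const credentials = new AWS.SharedIniFileCredentials({ profile: 'testuser' });
AWS.config.credentials = credentials;

const sns = new AWS.SNS({ region: 'us-east-1' });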
Install dotenv
npm install dotenv --save
Create a .env file and add your Variables
AWS_ACCESS_KEY=1234567890
AWS_SECRET_KEY=XXXXXXXXXXXXXXXXXXX
Load dotenv in your project
require('dotenv').config();
Complete code
require('dotenv').config();
const AWS = require('aws-sdk');

const SESConfig = {
  apiVersion: "2010-12-01",
  accessKeyId: process.env.AWS_ACCESS_KEY,
  secretAccessKey: process.env.AWS_SECRET_KEY,
  region: "us-east-1"
}
AWS.config.update(SESConfig);

var sns = new AWS.SNS();

function sendSMS(to_number, message, cb) {
  sns.publish({
    Message: message,
    Subject: 'Admin',
    PhoneNumber: to_number
  }, cb);
}

const PhoneNumberArray = ['any mobile number']
PhoneNumberArray.forEach(number => {
  sendSMS(number, "Lorem Ipsum is simply dummy text of the printing and typesetting industry.", (err, result) => {
    console.log("RESULTS: ", err, result)
  })
})
I was able to fix this problem by specifying an apiVersion
AWS.config.update({
  region: 'MY_REGION',
  apiVersion: 'latest',
  credentials: {
    accessKeyId: 'MY_ACCESS_KEY',
    secretAccessKey: 'MY_SECRET_KEY'
  }
})
It worked after I followed the exact names for the env vars from the AWS guide:
https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/loading-node-credentials-environment.html
AWS_ACCESS_KEY_ID
AWS_SECRET_ACCESS_KEY
AWS_SESSION_TOKEN (Optional)
Note that the variable names in ~/.aws/credentials are case sensitive. That was what caused my problem
You can simply load the credentials through a dedicated config.json file.
{
  "accessKeyId": "<YOUR_ACCESS_KEY_ID>",
  "secretAccessKey": "<YOUR_SECRET_ACCESS_KEY>",
  "region": "eu-west-3"
}
Then use the AWS load command
AWS.config.loadFromPath('./config.json');
In this case you wouldn't need to update the AWS config with AWS.config.update(...), as it is done right from the get-go.
Note that:
Loading credentials from a JSON document is not supported in browser scripts.
Source # https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/loading-node-credentials-json-file.html
I have stored all the credentials in my config file itself. For Windows, I solved it by adding an environment variable to my Node.js application in .env.local:
AWS_SDK_LOAD_CONFIG=1
I came across a similar problem, so I watched a few videos and read a bunch of documentation. In your dotenv file, set the IAM profile you want to use to access the account, e.g. AWS_PROFILE="exampleProfile"; this should be the same user your access key and secret came from. Then your config should look something like this:
const SESConfig = {
  apiVersion: "2010-12-01",
  profile: process.env.AWS_PROFILE,
  accessKeyId: process.env.AWS_ACCESS_KEY,
  secretAccessKey: process.env.AWS_SECRET_KEY,
  region: "us-east-1"
}
I switched to a prod role according to this
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_use_switch-role-cli.html#switch-role-cli-scenario-prod-env
where there wasn't a 'prod' entry in my ~/.aws/credentials file
I got the SDK calls in my script working by calling
export AWS_SDK_LOAD_CONFIG=1
before running it.
I encountered the same issue but in my case, I was forced to authenticate through GSuite. That's because, in my work environment, GSuite (from Google) is the Single Sign-On (SSO) provider.
I noticed that while a CLI command like:
aws s3 ls
worked as expected, the node.js code threw the error discussed in this article.
There are two solutions that worked in my case:
Add the relevant lines into the code from the sample below:
const AWS = require('aws-sdk');

const credentials = new AWS.SharedIniFileCredentials({ profile: '<your_profile_name>' });
AWS.config.credentials = credentials;
AWS.config.region = '<your_region>';

const s3 = new AWS.S3({ region: '<your_region>' });

(async () => {
  try {
    const data = await s3.putObject({
      Body: 'Hello World',
      Bucket: "<your_bucket_name>",
      Key: "my-file.txt"
    }).promise();
    console.log(data);
  } catch (err) {
    console.log(err, err.stack);
  }
})()
The second solution that also worked was using the proper environment variable.
On my macOS, I had set the environment variable incorrectly as:
AWS_DEFAULT_PROFILE=<your_profile>
But when I set the below environment variable, my code worked like a charm:
AWS_PROFILE=<your_profile>
Refer to this article by AWS on environment variables:
https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-envvars.html
Hope my solution helps.
I was in a different situation, but it led to the same error.
I was using a snippet from the example code for a Pinpoint Push Notification Lambda, and it included these lines:
// Specify that you're using a shared credentials file, and specify the
// IAM profile to use.
var credentials = new AWS.SharedIniFileCredentials({ profile: '...' });
AWS.config.credentials = credentials;
I was using this code in my own Amplify CLI generated PushNotification Function. There were no issues when working with the Function on its own.
When I tried to call the PushNotification Function from another resource, I got that same error:
Missing credentials in config, if using AWS_CONFIG_FILE, set AWS_SDK_LOAD_CONFIG=1
The solution for me was to simply remove the SharedIniFileCredentials code from the function entirely.
I presume this works because the Amplify environment is all managed, so that explicit AWS.config.credentials was redundant, as well as broken when running in certain scenarios.
Hope this helps anybody who is having a similar problem with calling Functions from other Functions in an Amplify project! I know that's not best practice, as discussed in this other Stack Overflow question, but it works.
I need to use multiple AWS credentials for different services like S3, SNS, etc.
var awsS3 = require('aws-sdk');
var awsSes = require('aws-sdk');

awsS3.config.update({
  region: config.awsRegion,
  accessKeyId: config.sesAccessKeyId,
  secretAccessKey: config.sesSecretAccessKey
});

awsSes.config.update({
  region: config.s3Region,
  accessKeyId: config.s3AccessKeyId,
  secretAccessKey: config.s3SecretAccessKey
});
But the above code is not working.
How do I configure multiple accessKeyIds and secretAccessKeys for different services?
You can pass config while creating the service objects. The following is what you are looking for:
const s3 = new aws.S3({ /* s3 config */ });
const ses = new aws.SES({ /* ses config */ });
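A slightly fuller sketch along those lines, reusing the config names from the question (the config module itself is assumed to be yours):
const aws = require('aws-sdk');
const config = require('./config'); // your own config module (assumption)

// Each service object carries its own credentials and region;
// the global aws.config is left untouched.
const s3 = new aws.S3({
  region: config.s3Region,
  accessKeyId: config.s3AccessKeyId,
  secretAccessKey: config.s3SecretAccessKey
});

const ses = new aws.SES({
  region: config.awsRegion,
  accessKeyId: config.sesAccessKeyId,
  secretAccessKey: config.sesSecretAccessKey
});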
I would think you want to control this with policies instead of having multiple credentials. Use a single credential/role, then custom policies for what you want to allow and deny for each service. Your application can then use that role/credentials and will be allowed or restricted based on the policies.
I use the code below within a Lambda, and it complies with everything I have read on Stack Overflow and in the AWS SDK documentation.
However, it neither returns anything nor throws an error. The code simply gets stuck on s3.getObject(params).promise(), so the Lambda function runs into a timeout, even after more than 30 seconds. The file I try to fetch is actually only 25 KB.
Any idea why this happens?
var AWS = require('aws-sdk');
var s3 = new AWS.S3({httpOptions: {timeout: 3000}});

async function getObject(bucket, objectKey) {
  try {
    const params = {
      Bucket: bucket,
      Key: objectKey
    }
    console.log("Trying to fetch " + objectKey + " from bucket " + bucket)
    const data = await s3.getObject(params).promise()
    console.log("Done loading image from S3")
    return data.Body.toString('utf-8')
  } catch (e) {
    console.log("error loading from S3")
    throw new Error(`Could not retrieve file from S3: ${e.message}`)
  }
}
When testing the function, I receive the following timeout:
START RequestId: 97782eac-019b-4d46-9e1e-3dc36ad87124 Version: $LATEST
2019-03-19T07:51:30.225Z 97782eac-019b-4d46-9e1e-3dc36ad87124 Trying to fetch public-images/low/ZARGES_41137_PROD_TECH_ST_LI.jpg from bucket zarges-pimdata-test
2019-03-19T07:51:54.979Z 97782eac-019b-4d46-9e1e-3dc36ad87124 error loading from S3
2019-03-19T07:51:54.981Z 97782eac-019b-4d46-9e1e-3dc36ad87124 {"errorMessage":"Could not retrieve file from S3: Connection timed out after 3000ms","errorType":"Error","stackTrace":["getObject (/var/task/index.js:430:15)","","process._tickDomainCallback (internal/process/next_tick.js:228:7)"]}
END RequestId: 97782eac-019b-4d46-9e1e-3dc36ad87124
REPORT RequestId: 97782eac-019b-4d46-9e1e-3dc36ad87124 Duration: 24876.90 ms
Billed Duration: 24900 ms Memory Size: 512 MB Max Memory Used: 120 MB
The image I am fetching is actually publicly available:
https://s3.eu-central-1.amazonaws.com/zarges-pimdata-test/public-images/low/ZARGES_41137_PROD_TECH_ST_LI.jpg
const data = (await (s3.getObject(params).promise())).Body.toString('utf-8')
If your Lambda function is associated with a VPC it loses internet access which is required to access S3. However, instead of following the Lambda warning that says "Associate a NAT" etc, you can create an S3 endpoint in the VPC > Endpoints settings, and your Lambda function will work as expected, with no need to manually set up Internet access for your VPC.
https://aws.amazon.com/blogs/aws/new-vpc-endpoint-for-amazon-s3/
The default timeout of the AWS SDK is 120000 ms. If your Lambda's timeout is shorter than that, you will never receive the actual error.
Either set the SDK's HTTP timeout to something shorter than your Lambda timeout:
var AWS = require('aws-sdk');
var s3 = new AWS.S3({httpOptions: {timeout: 3000}});
or extend the timeout of your Lambda.
This issue is definitely related to connectivity.
Check your VPC settings, as they are likely blocking the Lambda's connection to the Internet (AWS managed services such as S3 are accessible only via the Internet).
If you are using localstack, make sure SSL is false and s3ForcePathStyle is true.
That was my problem
AWS.S3({endpoint: '0.0.0.0:4572', sslEnabled: false, s3ForcePathStyle:true})
More details here
Are you sure you are providing your accessKeyId and secretAccessKey? I was having timeouts with no error message until I added them to the config:
AWS.config.update({
  signatureVersion: 'v4',
  region: "us-east-1",
  accessKeyId: secret.accessKeyID,
  secretAccessKey: secret.secretAccessKey
});