I'm making a simple React app that accesses RDS data via the DescribeDBInstances API. I want to allow public access, so I'm using Cognito with unauthenticated access enabled.
I have the following policy attached to the provided unauth role, yet when I call the RDS API from JavaScript (Node.js) I still get the error shown below it:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": "rds:DescribeDBInstances",
"Resource": "*"
}
]
}
AccessDenied: User: arn:aws:sts::(account):assumed-role/Cognito_TestUnauth_Role/CognitoIdentityCredentials is not authorized to perform: rds:DescribeDBInstances on resource: arn:aws:rds:us-east-1:(account):db:*
I redacted my account ID.
This default policy is also attached:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"mobileanalytics:PutEvents",
"cognito-sync:*"
],
"Resource": "*"
}
]
}
Here's my calling code:
import { RDSClient, DescribeDBInstancesCommand } from "@aws-sdk/client-rds";
import { CognitoIdentityClient } from "@aws-sdk/client-cognito-identity";
import { fromCognitoIdentityPool } from "@aws-sdk/credential-provider-cognito-identity";

// see https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-rds/index.html
export default async function getDbInstances() {
  const region = "us-east-1";
  const client = new RDSClient({
    region,
    credentials: fromCognitoIdentityPool({
      client: new CognitoIdentityClient({ region }),
      identityPoolId: "(my identity pool ID)",
    }),
  });
  const command = new DescribeDBInstancesCommand({});
  return (await client.send(command)).DBInstances;
}
I'm going a bit crazy here; everything seems to be set up correctly. What's missing?
I found the answer in the IAM roles documentation for Cognito: https://docs.aws.amazon.com/cognito/latest/developerguide/iam-roles.html (see the "Access Policies" section).
Enhanced authentication is the recommended flow for Cognito and is enabled by default, but because it uses the GetCredentialsForIdentity API under the hood, access is limited to a fixed set of AWS services regardless of the IAM policy, and RDS isn't one of the allowed services. I didn't see any way to override this limitation.
The solution is to switch to the basic authentication flow (you have to enable it first in the Cognito identity pool settings). Here's my working Node.js code that uses basic auth and then fetches the RDS instances:
import { RDSClient, DescribeDBInstancesCommand } from "@aws-sdk/client-rds";
import {
  CognitoIdentityClient,
  GetIdCommand,
  GetOpenIdTokenCommand
} from "@aws-sdk/client-cognito-identity";
import { getDefaultRoleAssumerWithWebIdentity } from "@aws-sdk/client-sts";
import { fromWebToken } from "@aws-sdk/credential-provider-web-identity";

const region = "us-east-1";
const cognitoClient = new CognitoIdentityClient({ region });

// see https://docs.aws.amazon.com/AWSJavaScriptSDK/v3/latest/clients/client-rds/index.html
export default async function getDbInstances() {
  const { Token, IdentityId } = await getTokenUsingBasicFlow();
  const client = new RDSClient({
    region,
    credentials: fromWebToken({
      roleArn: "arn:aws:iam::(account id):role/Cognito_RDSDataAppPoolUnauth_Role",
      webIdentityToken: Token,
      // role session names can't contain ":", so strip the region prefix from the identity ID
      roleSessionName: IdentityId.substring(IdentityId.indexOf(":") + 1),
      roleAssumerWithWebIdentity: getDefaultRoleAssumerWithWebIdentity()
    })
  });
  const command = new DescribeDBInstancesCommand({});
  return (await client.send(command)).DBInstances;
}

async function getTokenUsingBasicFlow() {
  const getIdCommand = new GetIdCommand({ IdentityPoolId: "us-east-1:(identity pool id)" });
  const id = (await cognitoClient.send(getIdCommand)).IdentityId;
  const getOpenIdTokenCommand = new GetOpenIdTokenCommand({ IdentityId: id });
  return await cognitoClient.send(getOpenIdTokenCommand);
}
Here's the documentation for the basic authentication flow vs enhanced that I followed to write my implementation: https://docs.aws.amazon.com/cognito/latest/developerguide/authentication-flow.html
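For completeness: enabling the basic (classic) flow can also be done programmatically rather than through the console. This is just a sketch using the AllowClassicFlow flag of the UpdateIdentityPool API; the pool ID placeholder matches the one above, and you'd need admin credentials to run it:

import {
  CognitoIdentityClient,
  DescribeIdentityPoolCommand,
  UpdateIdentityPoolCommand
} from "@aws-sdk/client-cognito-identity";

const adminClient = new CognitoIdentityClient({ region: "us-east-1" });

// UpdateIdentityPool replaces the whole pool configuration, so read it first
// and re-submit it with AllowClassicFlow (basic auth) switched on.
const pool = await adminClient.send(
  new DescribeIdentityPoolCommand({ IdentityPoolId: "us-east-1:(identity pool id)" })
);
await adminClient.send(new UpdateIdentityPoolCommand({ ...pool, AllowClassicFlow: true }));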
I was going to create a private (scoped) endpoint on my Express.js backend API to check some custom permissions. I'm using RBAC (Role-Based Access Control) in Auth0 with the express-oauth2-jwt-bearer package (https://www.npmjs.com/package/express-oauth2-jwt-bearer). I constantly get an Insufficient Scope error when I try to access that endpoint.
Express code:
const express = require('express');
const { auth, requiredScopes } = require('express-oauth2-jwt-bearer');

const app = express();
const checkJwt = auth();
const checkScopes = requiredScopes("getAll:student", { customScopeKey: "permissions" });

app.get(
  "/student/getAll",
  checkJwt,
  checkScopes,
  (req, res) => {
    res.json({
      message: 'Here is the all Student detail list'
    });
  }
);
Decoded JSON Web Token details:
{
"iss": "https://*************.us.auth0.com/",
"sub": "auth0|******************",
"aud": [
"http://*************",
"https://*************.us.auth0.com/userinfo"
],
"iat": 1657083984,
"exp": 1657170384,
"azp": "***********************",
"scope": "openid profile email",
"permissions": [
"delete:student",
"getAll:student",
"search:student",
"update:student"
]
}
But if I use const checkScopes = requiredScopes("openid profile email", { customScopeKey: "permissions" }); instead of const checkScopes = requiredScopes("getAll:student", { customScopeKey: "permissions" });, it works. I think the problem is that the permissions are checked against the default scope claim rather than the custom scope key. Can anyone help me fix it?
I'm having the same issue - it seems that requiredScopes is broken. What I did instead was use claimCheck from the same library:
const { claimCheck } = require('express-oauth2-jwt-bearer');

const checkClaims = claimCheck(claims => {
  return claims.permissions.includes('your:permission');
});
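Wired into the route from the question (with 'getAll:student' as the permission), it would look something like this; checkJwt is the same auth() middleware as above:

app.get('/student/getAll', checkJwt, checkClaims, (req, res) => {
  // claimCheck has already verified the permissions claim by the time we get here
  res.json({ message: 'Here is the all Student detail list' });
});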
I'm trying to connect to the Amazon Selling Partner API (SP-API) using the Node.js library and am running into an extremely odd error which seems to be telling me I can't assume my own role?
CustomError: User: arn:aws:iam::11111:user/bob is not authorized to perform: sts:AssumeRole on resource: arn:aws:iam::11111:user/bob
I'm fairly new to AWS, but I'm pretty sure this inline policy for the user should be sufficient for what I'm trying to do. I've even opened it up to all resources rather than just the SellingPartners role I'd previously created:
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": "sts:AssumeRole",
"Resource": "*"
}
]
}
Here's my full code in case it helps:
const SellingPartnerAPI = require('amazon-sp-api');
(async() => {
try {
let sellingPartner = new SellingPartnerAPI({
region:'na', // The region to use for the SP-API endpoints ("eu", "na" or "fe")
refresh_token:'xxxxxx', // The refresh token of your app user
credentials:{
SELLING_PARTNER_APP_CLIENT_ID:'xxxxx',
SELLING_PARTNER_APP_CLIENT_SECRET:'xxxxx',
AWS_ACCESS_KEY_ID:'xxxx',
AWS_SECRET_ACCESS_KEY:'xxxxx',
AWS_SELLING_PARTNER_ROLE:'arn:aws:iam::11111:user/bob'
}
});
let res = await sellingPartner.callAPI({
operation:'getOrders',
endpoint:'orders'
});
console.log(res);
} catch(e) {
console.log(e);
}
})();
The ARN arn:aws:iam::11111:user/bob describes a user, not a role.
It should probably be something like arn:aws:iam::11111:role/your-role-name, since the client expects a role ARN.
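For illustration, assuming the SellingPartners role mentioned in the question (and a trust policy that lets user bob assume it), the credentials block would become:

credentials: {
  SELLING_PARTNER_APP_CLIENT_ID: 'xxxxx',
  SELLING_PARTNER_APP_CLIENT_SECRET: 'xxxxx',
  AWS_ACCESS_KEY_ID: 'xxxx',
  AWS_SECRET_ACCESS_KEY: 'xxxxx',
  // a role ARN (role/...), not a user ARN (user/...)
  AWS_SELLING_PARTNER_ROLE: 'arn:aws:iam::11111:role/SellingPartners'
}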
I'm trying to store some images using AWS S3. Everything was running smoothly until I started getting 400s on PUTting images to the URLs I got from s3.getSignedUrl. At that point my code looked like this:
const s3 = new AWS.S3({
accessKeyId,
secretAccessKey
});
const imageRouter = express.Router();
imageRouter.post('/upload', (req, res) => {
const type = req.body.ContentType;
const Key = `${req.session.user.id}/${uuid()}.${type}`;
s3.getSignedUrl(
'putObject',
{
Bucket: 'cms-bucket-06',
ContentType: type,
Key
},
(err, url) => {
  console.log('URL ', url);
  res.send({ Key, url });
}
);
});
I followed the link in the error and found out that "The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256."
So I did, like this:
const s3 = new AWS.S3({signatureVersion: 'v4'});
But now I get no URL in my callback function. It's undefined. What am I still missing here?
EDIT:
Alright, I added my keys back to the constructor and I'm able to upload images. The new problem is that I can't open them: I get Access Denied every time. I added what I thought was a proper bucket policy, but it doesn't help :(
{
"Version": "2012-10-17",
"Id": "Policy1547050603038",
"Statement": [
{
"Sid": "Stmt1547050601490",
"Effect": "Allow",
"Principal": "*",
"Action": "s3:GetObject",
"Resource": "arn:aws:s3:::bucket-name/*"
}
]
}
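For reference, the working constructor presumably combines both changes, roughly:

const s3 = new AWS.S3({
  accessKeyId,
  secretAccessKey,
  signatureVersion: 'v4' // the AWS4-HMAC-SHA256 scheme the error message asked for
});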
Using Node.js, I'm making an API that makes calls to my S3 bucket on AWS. When I try to use the putObject method, I receive this error:
{
  message: 'Access Denied',
  code: 'AccessDenied',
  region: null,
  time: 2018-07-27T17:08:29.555Z,
  ... etc
}
I have config and credentials files in the C:/Users/{User}/.aws/ directory.
config file:
[default]
region=us-east-2
output=json
credentials file:
[default]
aws_access_key_id=xxxxxxxxxxxxxxx
aws_secret_access_key=xxxxxxxxxxx
I created policies for both IAM user and Bucket. Here's my IAM user inline policy:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:ListAllMyBuckets",
"s3:PutObject",
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::*"
]
}
]
}
And my bucket policy:
{
"Version": "2012-10-17",
"Id": "Policy1488494182833",
"Statement": [
{
"Sid": "Stmt1488493308547",
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::134100338998:user/Test-User"
},
"Action": [
"s3:ListBucket",
"s3:ListBucketVersions",
"s3:GetBucketLocation",
"s3:Get*",
"s3:Put*"
],
"Resource": "arn:aws:s3:::admin-blog-assets"
}
]
}
And finally, my api
var fs = require('fs'),
AWS = require('aws-sdk'),
s3 = new AWS.S3('admin-blog-assets');
...
var params = {
Bucket: 'admin-blog-assets',
Key: file.filename,
Body: fileData,
ACL:'public-read'
};
s3.putObject(params, function (perr, pres) {
if (perr) {
console.log("Error uploading image: ", perr);
} else {
console.log("uploading image successfully");
}
});
I've been banging my head on this for hours, can anyone help?
I believe the source of the problem is related to how you are defining the s3 object, as s3 = new AWS.S3('admin-blog-assets');
If you look at the example used here, it has this line:
var bucketPromise = new AWS.S3({apiVersion: '2006-03-01'}).createBucket({Bucket: bucketName}).promise();
The argument passed to AWS.S3 there is an object containing an apiVersion field, but you are passing a string value.
The S3-specific documentation overview section has more information:

Sending a request using S3:

var s3 = new AWS.S3();
s3.abortMultipartUpload(params, function (err, data) {
  if (err) console.log(err, err.stack); // an error occurred
  else console.log(data);               // successful response
});

Locking the API version: in order to ensure that the S3 object uses this specific API, you can construct the object by passing the apiVersion option to the constructor:

var s3 = new AWS.S3({apiVersion: '2006-03-01'});

You can also set the API version globally in AWS.config.apiVersions using the s3 service identifier:

AWS.config.apiVersions = {
  s3: '2006-03-01',
  // other service API versions
};
var s3 = new AWS.S3();
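Applied to the code in the question, that would mean constructing the client with an options object and passing the bucket per request instead; a sketch:

var AWS = require('aws-sdk'),
    s3 = new AWS.S3({ apiVersion: '2006-03-01' }); // options object, not a bucket name

// The bucket belongs in the request params, where it already appears:
s3.putObject({
  Bucket: 'admin-blog-assets',
  Key: file.filename,
  Body: fileData,
  ACL: 'public-read'
}, function (err, data) {
  if (err) console.log('Error uploading image: ', err);
  else console.log('uploading image successfully');
});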
Some of the permissions you were granting were bucket permissions and others were object permissions. There are actions matching s3:Get* and s3:Put* that apply to both buckets and objects.
"Resource": "arn:aws:s3:::example-bucket" is only the bucket itself, not the objects inside it.
"Resource": "arn:aws:s3:::example-bucket/*" is only the objects in the bucket, and not the bucket itself.
You can write two policy statements, or you can combine the resources, like this:
"Resource": [
"arn:aws:s3:::example-bucket",
"arn:aws:s3:::example-bucket/*"
]
Important Security Consideration: By using s3:Put* with both the bucket and object ARNs, your policy likely violates the principle of least privilege, because you have implicitly granted this user s3:PutBucketPolicy which allows these credentials to change the bucket policy. There may be other, similar concerns. You probably do not want to give these credentials that much control.
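A tighter version would split the statement and grant only the specific actions needed, for example (shown as an IAM user policy; a bucket policy would also need the Principal):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "BucketLevel",
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
      "Resource": "arn:aws:s3:::example-bucket"
    },
    {
      "Sid": "ObjectLevel",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject"],
      "Resource": "arn:aws:s3:::example-bucket/*"
    }
  ]
}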
Credit to @PatNeedham for noticing a second issue that I overlooked: the AWS.S3() constructor expects an object as its first argument, not a string.
I have been tasked with making a POST API call to an Elasticsearch API:
https://search-test-search-fqa4l6ubylznt7is4d5yxlmbxy.us-west-2.es.amazonaws.com/klove-ddb/recipe/_search
I don't have any previous experience with making api calls to AWS services.
So, I tried this -
axios.post('https://search-test-search-fqa4l6ubylznt7is4d5yxlmbxy.us-west-2.es.amazonaws.com/klove-ddb/recipe/_search')
.then(res => res.data)
.then(res => console.log(res));
But I was getting {"Message":"User: anonymous is not authorized to perform: es:ESHttpPost"}
I also experimented with some IAM roles and added the AWSESFullAccess policy to my profile.
Still I can't make anything work out.
Please help me.
The reason you're seeing the error User: anonymous is not authorized to perform: es:ESHttpPost is that you're requesting data without letting Elasticsearch know who you are, which is why it says 'anonymous'.
There are a couple of ways to authenticate, the easiest being the elasticsearch library. You give the library a set of credentials (access key, secret key) for an IAM role or user, and it uses them to create signed requests. Signed requests let AWS know who's actually making the request, so it won't be treated as anonymous.
Another way of getting this to work is to adjust your access policy to be IP-based:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "*"
},
"Action": "es:*",
"Condition": {
"IpAddress": {
"aws:SourceIp": [
"AAA.BBB.CCC.DDD"
]
}
},
"Resource": "YOUR_ELASTICSEARCH_CLUSTER_ARN"
}
]
}
This particular policy will be wide open to anyone with the IP (range) that you provide here, but it will spare you the hassle of signing your requests.
A library that helps set up elasticsearch-js with AWS ES is http-aws-es.
A working example is the following:
const AWS = require('aws-sdk');
const elasticsearch = require('elasticsearch');
const awsHttpClient = require('http-aws-es');

let client = new elasticsearch.Client({
  host: '<YOUR_ES_CLUSTER_ID>.<YOUR_ES_REGION>.es.amazonaws.com',
  connectionClass: awsHttpClient,
  amazonES: {
    region: '<YOUR_ES_REGION>',
    credentials: new AWS.Credentials('<YOUR_ACCESS_KEY>', '<YOUR_SECRET_KEY>')
  }
});
client.search({
index: 'twitter',
type: 'tweets',
body: {
query: {
match: {
body: 'elasticsearch'
}
}
}
})
.then(res => console.log(res));
The elasticsearch npm package is going to be deprecated soon; use @elastic/elasticsearch and @acuris/aws-es-connection so you don't have to provide IAM credentials to the function.
Here's the code I use:
'use strict';

const { Client } = require('@elastic/elasticsearch');
const { createAWSConnection, awsGetCredentials } = require('@acuris/aws-es-connection');

module.exports.get_es_interests = async event => {
  const awsCredentials = await awsGetCredentials();
  const AWSConnection = createAWSConnection(awsCredentials);
  const client = new Client({
    ...AWSConnection,
    node: 'your-endpoint',
  });

  let bodyObj = {};
  try {
    bodyObj = JSON.parse(event.body);
  } catch (jsonError) {
    console.log('There was an error parsing the JSON Object', jsonError);
    return {
      statusCode: 400
    };
  }

  let keyword = bodyObj.keyword;
  const { body } = await client.search({
    index: 'index-name',
    body: {
      query: {
        match: {
          name: {
            query: keyword,
            analyzer: "standard"
          }
        }
      }
    }
  });

  var result = body.hits.hits;
  return result;
};
Now there's https://github.com/gosquared/aws-elasticsearch-js
Import the packages:

const AWS = require('aws-sdk');
const ElasticSearch = require('@elastic/elasticsearch');
const { createConnector } = require('aws-elasticsearch-js');
Configure the client using a named profile from ~/.aws/config. You can verify your profiles by running cat ~/.aws/config, which should output something like:
[profile work]
region=ap-southeast-2
[default]
region = ap-southeast-1
const esClient = new ElasticSearch.Client({
nodes: [
'<aws elastic search domain here>'
],
Connection: createConnector({
region: '<region>',
getCreds: callback =>
callback(
null,
new AWS.SharedIniFileCredentials({ profile: '<target profile>' })
)
})
});
Then you can start using it like:
// this query will delete all documents in an index
await esClient.deleteByQuery({
index: '<your index here>',
body: {
query: {
match_all: {}
}
}
});
References:
https://github.com/gosquared/aws-elasticsearch-js
https://www.elastic.co/guide/en/elasticsearch/reference/current/docs-delete-by-query.html
https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/SharedIniFileCredentials.html