AWS Video Rekognition is not publishing results to SNS Topic - node.js

I'm running AWS Rekognition from Node.js to detect labels in an MP4 video, but it will not publish to the specified SNS topic when complete. I don't get any permission errors when submitting the request with the topic/role ARNs.
const AWS = require('aws-sdk');

AWS.config.update({
  region: 'us-west-2',
  accessKeyId: "asdfadsf",
  secretAccessKey: "asdfasdfasdfasd1234123423"
});

const rekognition = new AWS.Rekognition();

const params = {
  Video: {
    S3Object: {
      Bucket: 'myvidebucket',
      Name: '5d683b81760ec59c2015.mp4'
    }
  },
  NotificationChannel: {
    RoleArn: 'arn:aws:iam::xxxxxxxxxxxxx:role/AmazonRekognitionSNSSuccessFeedback',
    SNSTopicArn: 'arn:aws:sns:us-west-2:xxxxxxxxxxxxx:recoknize',
  },
  MinConfidence: 60
};

rekognition.startLabelDetection(params).promise().then(data => {
  console.log(JSON.stringify(data));
}).catch(error => {
  console.log(error);
});
That code executes with no errors, and I get back a job ID. My SNS topic subscription is confirmed and is supposed to post to my HTTPS endpoint. But nothing ever arrives, and there are no error logs anywhere in the AWS console about this.
When I manually fetch the Rekognition results by job ID, the data comes back fine, so I know the job finished correctly. Something strange has to be going on with IAM permissions.
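For reference, fetching the results manually looks roughly like this (a sketch; the JobId is the one returned by startLabelDetection):

rekognition.getLabelDetection({ JobId: 'job-id-from-the-start-call' }).promise()
  .then(data => {
    console.log(data.JobStatus); // 'SUCCEEDED' once the job finishes
    console.log(JSON.stringify(data.Labels));
  })
  .catch(error => console.log(error));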

I have reviewed and tested your Node.js code successfully and I don't see anything wrong with it.
Since the code returns the AWS Rekognition JobId successfully, you can review your SNS configuration and check that it matches the following:
1. On your SNS topic ('arn:aws:sns:us-west-2:xxxxxxxxxxxxx:recoknize'), navigate to the access policy and check that you have a policy similar to the following:
{
  "Version": "2008-10-17",
  "Id": "__default_policy_ID",
  "Statement": [
    {
      "Sid": "__default_statement_ID",
      "Effect": "Allow",
      "Principal": {
        "Service": "rekognition.amazonaws.com"
      },
      "Action": [
        "SNS:GetTopicAttributes",
        "SNS:SetTopicAttributes",
        "SNS:AddPermission",
        "SNS:RemovePermission",
        "SNS:DeleteTopic",
        "SNS:Subscribe",
        "SNS:ListSubscriptionsByTopic",
        "SNS:Publish",
        "SNS:Receive"
      ],
      "Resource": "arn:aws:sns:us-west-2:XXXXXXXXXXXX:AmazonRekognitionTopic"
    }
  ]
}
2. On your IAM role ('arn:aws:iam::xxxxxxxxxxxxx:role/AmazonRekognitionSNSSuccessFeedback'), make sure of the following:
(i) The "Trust relationship" of your role has the following statement:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": "rekognition.amazonaws.com"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
(ii) The role has an attached policy document similar to the one given below:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "sns:Publish"
      ],
      "Resource": "*"
    }
  ]
}
The successfully published message from Amazon Rekognition to the SNS topic should look similar to:
{"JobId":"8acd9edd6edfb0e4985f8cd269e4863e54f7fcd451af6aafe10b32996dedbdba","Status":"SUCCEEDED","API":"StartLabelDetection","Timestamp":1568544553927,"Video":{"S3ObjectName":"final.mp4","S3Bucket":"syumak-rekognition"}}
Hope this helps.
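You can also rule out the HTTPS subscription itself by publishing a test message straight to the topic (a quick sketch, reusing the topic ARN from your code):

const sns = new AWS.SNS({ region: 'us-west-2' });
sns.publish({
  TopicArn: 'arn:aws:sns:us-west-2:xxxxxxxxxxxxx:recoknize',
  Message: 'test delivery to the HTTPS endpoint'
}).promise()
  .then(data => console.log('Published, MessageId:', data.MessageId))
  .catch(error => console.log(error));

If that message reaches your endpoint, the subscription is fine and the problem is between Rekognition and SNS.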

Buried in the docs, it's apparent that:
https://docs.aws.amazon.com/rekognition/latest/dg/api-video-roles.html#api-video-roles-all-topics
AmazonRekognitionServiceRole gives Amazon Rekognition Video access to Amazon SNS TOPICS that are PREFIXED with AmazonRekognition.
It doesn't say the role ARN needs to be prefixed, but it won't hurt. Double check that your TOPIC is named like AmazonRekognitionMyTopicName:
RoleArn: 'arn:aws:iam::xxxxxxxxxxxxx:role/AmazonRekognitionSNSSuccessFeedback', <- don't think this is so important.
SNSTopicArn: 'arn:aws:sns:us-west-2:xxxxxxxxxxxxx:recoknize', <- must be named something like AmazonRekognitionSuccess.
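So the notification channel would end up looking something like this (a sketch; AmazonRekognitionSuccess is an example name, not a required one):

NotificationChannel: {
  RoleArn: 'arn:aws:iam::xxxxxxxxxxxxx:role/AmazonRekognitionSNSSuccessFeedback',
  // topic renamed to carry the AmazonRekognition prefix
  SNSTopicArn: 'arn:aws:sns:us-west-2:xxxxxxxxxxxxx:AmazonRekognitionSuccess',
},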
Also, this helped (I moved off the FIFO topic to a standard topic, which allows subscribing via email in addition to SQS):
https://docs.aws.amazon.com/rekognition/latest/dg/video-troubleshooting.html
This line in particular:
"Verify that you have an IAM service role that gives Amazon Rekognition Video permissions to publish to your Amazon SNS topics. For more information, see Configuring Amazon Rekognition Video."
I created a new IAM role and gave it:
AmazonRekognitionFullAccess
AmazonSNSRole
AmazonSNSFullAccess
I updated the trust relationship to include both sns.amazonaws.com and rekognition.amazonaws.com:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "sns.amazonaws.com",
          "rekognition.amazonaws.com"
        ]
      },
      "Action": "sts:AssumeRole"
    }
  ]
}
Not sure which one of these made everything click, but it cost me a good half day, so hopefully this will save someone some time.

Trust relationship solved it for me. Add the below statement to the trust relationship of the IAM role that will be used as the RoleArn for the script:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Service": [
          "sns.amazonaws.com",
          "rekognition.amazonaws.com",
          "sagemaker.amazonaws.com"
        ]
      },
      "Action": "sts:AssumeRole",
      "Condition": {}
    }
  ]
}

Related

Access Denied issue in AWS Cross Account S3 PutObject encrypted by AWS Managed Key

I am trying to put a text file from a Lambda in Account B to an S3 bucket in Account A. The S3 bucket (test-bucket) has AWS KMS encryption enabled with the aws/s3 managed key.
1. I added the below permissions to the Account A S3 bucket (test-bucket):
{
  "Version": "2012-10-17",
  "Id": "ExamplePolicy",
  "Statement": [
    {
      "Sid": "ExampleStmt",
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::AccountB:role/Lambda-Role"
      },
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::test-bucket/*"
    }
  ]
}
2. Added the below inline policy to my Lambda execution role in Account B:
{"Version": "2012-10-17",
"Statement": [
{
"Sid": "VisualEditor0",
"Effect": "Allow",
"Action": [
"kms:Decrypt",
"kms:Encrypt",
"kms:GenerateDataKey",
"kms:DescribeKey",
"kms:ReEncrypt*"
],
"Resource": [
"arn:aws:kms:us-west-2:AccountA:key/AWS-KMS-ID"
]
}
]
}
This is my Lambda code:
import boto3

s3 = boto3.client('s3')

res = s3.put_object(
    Body=message,
    Key=file_name,
    Bucket='test-bucket',
    ACL='bucket-owner-full-control'
)
I am getting the below error when running this code from the Account B Lambda:
An error occurred (AccessDenied) when calling the PutObject operation: Access Denied
Since the S3 bucket is encrypted with an AWS managed key, I cannot edit the KMS key policy the way we would with a customer managed key.
Can someone please tell me what I am missing?
Try granting your Lambda function permission for the s3:PutObject action. The inline policy of your Lambda role should then look something like:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "kms:Decrypt",
        "kms:Encrypt",
        "kms:GenerateDataKey",
        "kms:DescribeKey",
        "kms:ReEncrypt*"
      ],
      "Resource": [
        "arn:aws:kms:us-west-2:AccountA:key/AWS-KMS-ID"
      ]
    },
    {
      "Effect": "Allow",
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::test-bucket/*"
    }
  ]
}
I've been troubleshooting this for a couple of hours myself.
I don't believe this is possible with the default AWS managed key when using SSE-KMS. Instead you have to create a CMK and grant the cross-account user access to that key.
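For reference, the key policy on the CMK would need a statement along these lines (a sketch, reusing the role ARN from the question):

{
  "Sid": "AllowCrossAccountUseOfTheKey",
  "Effect": "Allow",
  "Principal": {
    "AWS": "arn:aws:iam::AccountB:role/Lambda-Role"
  },
  "Action": [
    "kms:Encrypt",
    "kms:Decrypt",
    "kms:ReEncrypt*",
    "kms:GenerateDataKey*",
    "kms:DescribeKey"
  ],
  "Resource": "*"
}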
HTH
Cross-account access cannot be granted for an AWS managed key. You need to use a customer managed key or default encryption.
This can be useful: https://aws.amazon.com/premiumsupport/knowledge-center/s3-bucket-access-default-encryption/

AWS S3 403 access denied issue with nodeJS

The following is my bucket policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AddCannedAcl",
      "Effect": "Allow",
      "Principal": {
        "AWS": "==mydetails=="
      },
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl"
      ],
      "Resource": "arn:aws:s3:::etcetera-dev/*",
      "Condition": {
        "StringEquals": {
          "s3:x-amz-acl": "public-read"
        }
      }
    }
  ]
}
This is my IAM user inline policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListAllMyBuckets",
        "s3:PutObject",
        "s3:GetObject",
        "s3:PutObjectAcl"
      ],
      "Resource": [
        "arn:aws:s3:::*"
      ]
    }
  ]
}
Now I'm trying to upload a file using multer-s3 with acl: 'public-read', and I'm getting 403 Access Denied. If I don't use the acl property in multer, I can upload with no issues.
You may have fixed this by now, but if you haven't, there are many different possible fixes (see: https://aws.amazon.com/premiumsupport/knowledge-center/s3-troubleshoot-403/).
But I ran into the same problem, and what fixed it for me was the following.
I presume you're calling s3.upload() when trying to upload your file. I found that if there is no Bucket parameter within your upload() options, you will also receive a 403.
I.e. ensure your upload() call looks like the following:
await s3.upload({
  Bucket: // some s3Config.Bucket
  Body: // Stream or File,
  Key: // Filename,
  ContentType: // Mimetype
}).promise();
Bucket: // some s3Config.Bucket - I was missing this param in the function call, as I thought that new AWS.S3(config) handled the bucket. Turns out you should always add the bucket to your upload params.
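If you're going through multer-s3 rather than calling upload() directly, the bucket and ACL are set on the storage engine instead. A minimal sketch (the bucket name is taken from the question; the key callback is just an example):

const multer = require('multer');
const multerS3 = require('multer-s3');
const AWS = require('aws-sdk');

const s3 = new AWS.S3();

const upload = multer({
  storage: multerS3({
    s3: s3,
    bucket: 'etcetera-dev',
    acl: 'public-read', // must satisfy the s3:x-amz-acl condition in the bucket policy
    key: function (req, file, cb) {
      cb(null, file.originalname); // example: keep the original filename
    }
  })
});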

Python3.7 script to export CloudWatch logs to S3

I am using the below code to copy CloudWatch logs to S3:
import boto3
import collections
from datetime import datetime, date, time, timedelta

region = 'eu-west-1'

def lambda_handler(event, context):
    yesterday = datetime.combine(date.today() - timedelta(1), time())
    today = datetime.combine(date.today(), time())
    unix_start = datetime(1970, 1, 1)
    client = boto3.client('logs')
    response = client.create_export_task(
        taskName='Export_CloudwatchLogs',
        logGroupName='/aws/lambda/stop-instances',
        fromTime=int((yesterday - unix_start).total_seconds() * 1000),
        to=int((today - unix_start).total_seconds() * 1000),
        destination='bucket',
        destinationPrefix='bucket-{}'.format(yesterday.strftime("%Y-%m-%d"))
    )
    return 'Response from export task at {} :\n{}'.format(
        datetime.now().isoformat(), response)
I gave the below policy to the role:
policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:CreateLogGroup",
        "logs:CreateLogStream",
        "logs:PutLogEvents",
        "logs:DescribeLogStreams",
        "logs:CreateExportTask",
        "logs:DescribeExportTasks",
        "logs:DescribeLogGroups"
      ],
      "Resource": [
        "arn:aws:logs:*:*:*"
      ]
    }
  ]
}
EOF
2nd policy:
policy = <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": [
        "s3:PutObject",
        "s3:PutObjectAcl",
        "s3:GetBucketAcl"
      ],
      "Effect": "Allow",
      "Resource": ["arn:aws:s3:::${var.source_market}-${var.environment}-${var.bucket}/*"],
      "Condition": { "StringEquals": { "s3:x-amz-acl": "bucket-owner-full-control" } }
    }
  ]
}
EOF
I am getting the below error when I execute this in the AWS console:
{
  "errorMessage": "An error occurred (InvalidParameterException) when calling the CreateExportTask operation: GetBucketAcl call on the given bucket failed. Please check if CloudWatch Logs has been granted permission to perform this operation.",
  "errorType": "InvalidParameterException"
}
I have referred to many blogs and attached what seem to be the appropriate policies to the role.
Check the encryption settings on your bucket. I had the same problem, and it was because the bucket was set to AWS-KMS encryption. I was getting this error with the same permissions you have, and it started working as soon as I switched the encryption to AES-256.
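If you want to make that change from the CLI, something like the following should do it (a sketch; substitute your bucket name):

aws s3api put-bucket-encryption \
  --bucket your-export-bucket \
  --server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'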
It seems like an issue with S3 bucket permissions. You need to attach this policy to your S3 bucket; please amend the policy with your bucket name and the AWS region for CloudWatch Logs:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Action": "s3:GetBucketAcl",
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::my-exported-logs",
      "Principal": { "Service": "logs.us-west-2.amazonaws.com" }
    },
    {
      "Action": "s3:PutObject",
      "Effect": "Allow",
      "Resource": "arn:aws:s3:::my-exported-logs/random-string/*",
      "Condition": { "StringEquals": { "s3:x-amz-acl": "bucket-owner-full-control" } },
      "Principal": { "Service": "logs.us-west-2.amazonaws.com" }
    }
  ]
}
https://docs.aws.amazon.com/AmazonCloudWatch/latest/logs/S3ExportTasksConsole.html
I had the same error. The issue was that my destination parameter contained something like bucket/something, while the policy only covered bucket. Removing the something prefix from the parameter fixed the problem, so check that the policy and the parameter match.
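In other words, keep the bucket name and the path prefix in separate parameters so each matches the policy. Roughly (a sketch with example values, using the CLI equivalent of create_export_task):

aws logs create-export-task \
  --task-name Export_CloudwatchLogs \
  --log-group-name /aws/lambda/stop-instances \
  --from 1568000000000 --to 1568086400000 \
  --destination bucket \
  --destination-prefix something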

AWS Elastic Search Policy, only allow lambda to access Elastic Search

I'm working on setting up an ElasticSearch instance on AWS. My goal is to only allow HTTP requests from my Lambda function to the ElasticSearch instance. I have created one policy that gives the Lambda access to the ElasticSearch instance. The part I'm struggling with is the inline resource policy for ElasticSearch that will deny all other requests that aren't from the Lambda.
I have tried setting the ElasticSearch resource policy to deny all requests and then giving my Lambda a role with access to ElasticSearch. While the Lambda is using that role, I sign my HTTP requests using axios and aws4, but the requests are rejected with "The request signature we calculated does not match the signature you provided." I don't think the issue is the actual signing of the request but instead the policies I created. If anyone can steer me in the right direction, that would really help.
Lambda Policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "VisualEditor0",
      "Effect": "Allow",
      "Action": [
        "es:ESHttpGet",
        "es:CreateElasticsearchDomain",
        "es:DescribeElasticsearchDomainConfig",
        "es:ListTags",
        "es:ESHttpDelete",
        "es:GetUpgradeHistory",
        "es:AddTags",
        "es:ESHttpHead",
        "es:RemoveTags",
        "es:DeleteElasticsearchDomain",
        "es:DescribeElasticsearchDomain",
        "es:UpgradeElasticsearchDomain",
        "es:ESHttpPost",
        "es:UpdateElasticsearchDomainConfig",
        "es:GetUpgradeStatus",
        "es:ESHttpPut"
      ],
      "Resource": "arn:aws:es:us-east-1:<account-id>:domain/<es-instance>"
    },
    {
      "Sid": "VisualEditor1",
      "Effect": "Allow",
      "Action": [
        "es:PurchaseReservedElasticsearchInstance",
        "es:DeleteElasticsearchServiceRole"
      ],
      "Resource": "*"
    }
  ]
}
ElasticSearch Inline Policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Deny",
      "Principal": {
        "AWS": [
          "*"
        ]
      },
      "Action": [
        "es:*"
      ],
      "Resource": "arn:aws:es:us-east-1:<account-number>:domain/<es-instance>/*"
    }
  ]
}
Lambda Code Using Aws4 and Axios
// process.env.HOST = search-<es-instance>-<es-id>.us-east-1.es.amazonaws.com
function createRecipesIndex(url, resolve, reject) {
  axios(aws4.sign({
    host: process.env.HOST,
    method: "PUT",
    url: "https://" + process.env.HOST,
    path: '/recipes/',
  }))
    .then(response => {
      console.log("----- SUCCESS INDEX CREATED -----");
      resolve();
    })
    .catch(error => {
      console.log("----- FAILED TO CREATE INDEX -----");
      console.log(error);
      reject();
    });
}
Note: I have tried creating my index with the inline policy on ElasticSearch set to allow * (all) and removing the aws4 signing, and it works fine. Right now I just want to secure access to this resource.
I found the solution to my issue, and it was twofold. The first issue was the inline resource policy on my ElasticSearch instance: I needed to update it to allow the role I had given to my Lambda. This was done by getting the role ARN from IAM and then attaching the policy below inline on the ElasticSearch instance.
My second issue was with aws4: the path and the url I set did not match. My path had /xxxx/ while my url was https://search-<es-instance>-<es-id>.us-east-1.es.amazonaws.com/xxxx. Since the path contained an extra trailing slash not found in the url, the signing failed. For anyone else using the library, make sure those values are consistent (see the sketch after the policy below). I hope this helps someone else out in the future :D
Elastic Search Policy
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::<account-id>:role/service-role/<role-name>"
      },
      "Action": "es:*",
      "Resource": "arn:aws:es:us-east-1:<account-id>:domain/<es-instance>/*"
    }
  ]
}
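And for the second issue, here is roughly what the corrected aws4 signing call looks like, with path and url kept consistent (host and index name here are examples):

axios(aws4.sign({
  host: process.env.HOST,
  method: "PUT",
  url: "https://" + process.env.HOST + "/recipes",
  path: "/recipes", // must match the path component of url exactly
}))
  .then(response => console.log("index created"))
  .catch(error => console.log(error));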

Unable to authenticate aws iot subscribe through node sdk

I need to integrate an AWS IoT based MQTT service. Another developer already set up MQTT and gave me the AWS account credentials. They also gave us two topic names: one for publishing data, the other to subscribe to for status data.
For testing purposes I created a device in the AWS IoT panel, which gave me the Node.js IoT SDK to download. I set it up on my local machine and played with the device-example script in the examples folder. I modified the AWS policy attached to my device to allow access to the two topics, one for publish and one for subscribe.
But all of this failed. The script gives the following output:
connect
offline
close
reconnect
connect
offline
close
and so on..
When I checked the AWS CloudWatch logs for IoT, I found the issue:
{
  "timestamp": "2018-10-25 07:13:10.056",
  "logLevel": "ERROR",
  "traceId": "TRACEID",
  "accountId": "ACCOUNTID",
  "status": "Failure",
  "eventType": "Subscribe",
  "protocol": "MQTT",
  "topicName": "status topic name",
  "clientId": "sdk-nodejs-uuid",
  "principalId": "clientid",
  "sourceIp": "IP",
  "sourcePort": PORT,
  "reason": "AUTHORIZATION_FAILURE"
}
My changed policy is:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "iot:Publish",
        "iot:Receive"
      ],
      "Resource": [
        "arn:aws:iot:us-east-2:clientid:topic/publish-topic-name"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "iot:Subscribe",
        "iot:Receive"
      ],
      "Resource": [
        "arn:aws:iot:us-east-2:clientid:topic/subscribe-topic-name"
      ]
    },
    {
      "Effect": "Allow",
      "Action": [
        "iot:Connect"
      ],
      "Resource": [
        "arn:aws:iot:us-east-2:clientid:client/sdk-nodejs-*",
        "arn:aws:iot:us-east-2:clientid:topic/publish-topic-name",
        "arn:aws:iot:us-east-2:clientid:topic/subscribe-topic-name"
      ]
    }
  ]
}
Then I even gave all IoT permissions on all topics, but I still get the authorization error:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "iot:*"
      ],
      "Resource": [
        "arn:aws:iot:us-east-2:clientid:client/sdk-nodejs-*",
        "arn:aws:iot:us-east-2:clientid:topic/*"
      ]
    }
  ]
}
For publish I only get the connect console output, and I also don't get any logs in CloudWatch, so I am not sure whether it succeeded or not.
UPDATE: OK, I found the issue after some searching: you have to add a topicfilter resource along with the topic in the policy. It looks like this is required for subscribe topics. The updated policy is below.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "iot:*"
      ],
      "Resource": [
        "arn:aws:iot:us-east-2:clientid:client/sdk-nodejs-*",
        "arn:aws:iot:us-east-2:clientid:topicfilter/*",
        "arn:aws:iot:us-east-2:clientid:topic/*"
      ]
    }
  ]
}
Have you also configured an IoT policy? To connect to the IoT platform with an IAM user (MQTT over WSS), you not only need an IAM policy which allows access, but also an IoT policy which does so. On top of this, you should check that your policies are using the correct resource identifier: there is a difference between how resources are defined for iot:Publish versus iot:Subscribe.
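Concretely, iot:Subscribe is authorized against a topicfilter resource, while iot:Receive and iot:Publish are authorized against a topic resource. A scoped-down version of the wildcard policy above might therefore look like this (a sketch, keeping the placeholders from the question):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "iot:Subscribe",
      "Resource": "arn:aws:iot:us-east-2:clientid:topicfilter/subscribe-topic-name"
    },
    {
      "Effect": "Allow",
      "Action": "iot:Receive",
      "Resource": "arn:aws:iot:us-east-2:clientid:topic/subscribe-topic-name"
    }
  ]
}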
