I can compare two images if they are located in the root of the S3 bucket.
const params = {
  SourceImage: {
    S3Object: {
      Bucket: bucket,
      Name: 'source.jpg'
    }
  },
  TargetImage: {
    S3Object: {
      Bucket: bucket,
      Name: 'target.jpg'
    }
  },
  SimilarityThreshold: 90
}
But I get an error if they are in sub-folders:
message: 'Request has invalid parameters',
code: 'InvalidParameterException',
time: 2019-11-25T13:12:44.498Z,
requestId: '7ac7f297-fc36-436b-a1dc-113d419da766',
statusCode: 400,
retryable: false,
retryDelay: 71.0571139838835
If I try to compare images in sub-folders, it fails (note: I tried with './' and '/' before the path - same thing):
const params = {
  SourceImage: {
    S3Object: {
      Bucket: bucket,
      Name: '/sub1/sub2/source.jpg'
    }
  },
  TargetImage: {
    S3Object: {
      Bucket: bucket,
      Name: '/sub1/sub2/target.jpg'
    }
  },
  SimilarityThreshold: 90
}
I really need the photos to be in sub-folders. Any help would be appreciated.
Here's a working example. Note that the object names are plain key prefixes (pref1/image1.jpg) with no leading slash: S3 object keys do not start with '/'.
import boto3

reko = boto3.client('rekognition')

resp = reko.compare_faces(
    SourceImage={
        'S3Object': {
            'Bucket': 'jsimon-public-us',
            'Name': 'pref1/image1.jpg',
        }
    },
    TargetImage={
        'S3Object': {
            'Bucket': 'jsimon-public-us',
            'Name': 'pref2/image2.jpg',
        }
    }
)
print(resp)
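The same fix should apply to the Node.js params from the question. A minimal sketch of the corrected request, assuming the same bucket variable, with the keys written without any leading slash:

const params = {
  SourceImage: {
    S3Object: {
      Bucket: bucket,
      Name: 'sub1/sub2/source.jpg' // no leading '/' or './'
    }
  },
  TargetImage: {
    S3Object: {
      Bucket: bucket,
      Name: 'sub1/sub2/target.jpg'
    }
  },
  SimilarityThreshold: 90
}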
I have a JavaScript file on my Node.js server that runs at 00:00:00 and updates some fields in the database. When that happens, I want to send out a push notification to some users. I've set this up in my JavaScript file, following:
https://dev.to/devsmranjan/web-push-notification-with-web-push-angular-node-js-36de
const subscription = {
  endpoint: '',
  expirationTime: null,
  keys: {
    auth: '',
    p256dh: '',
  },
};

const payload = {
  notification: {
    title: 'Title',
    body: 'This is my body',
    icon: 'assets/icons/icon-384x384.png',
    actions: [
      {action: 'bar', title: 'Focus last'},
      {action: 'baz', title: 'Navigate last'},
    ],
    data: {
      onActionClick: {
        default: {operation: 'openWindow'},
        bar: {
          operation: 'focusLastFocusedOrOpen',
          url: '/signin',
        },
        baz: {
          operation: 'navigateLastFocusedOrOpen',
          url: '/signin',
        },
      },
    },
  },
};

const options = {
  vapidDetails: {
    subject: 'mailto:example_email@example.com',
    publicKey: process.env.REACT_APP_PUBLIC_VAPID_KEY,
    privateKey: process.env.REACT_APP_PRIVATE_VAPID_KEY,
  },
  TTL: 60,
};

webpush.sendNotification(subscription, JSON.stringify(payload), options)
  .then((_) => {
    console.log(subscription);
    console.log('SENT!!!');
    console.log(_);
  })
  .catch((_) => {
    console.log(subscription);
    console.log(_);
  });
But when I run the file I get the message:
{ endpoint: '', expirationTime: null, keys: { auth: '', p256dh: '' } } Error: You must pass in a subscription with at least an endpoint.
Which makes sense, since the server has no idea about service workers, etc. Any suggestions on how to proceed?
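For what it's worth, the usual pattern is to capture the subscription in the browser (PushManager.subscribe on the service worker registration), POST it to the server, and persist it; the scheduled job then loads the stored subscriptions instead of hard-coding an empty one. A minimal sketch, where getSubscriptionsFromDb() is a hypothetical helper returning the persisted subscription objects:

const webpush = require('web-push');

webpush.setVapidDetails(
  'mailto:example_email@example.com',
  process.env.REACT_APP_PUBLIC_VAPID_KEY,
  process.env.REACT_APP_PRIVATE_VAPID_KEY
);

async function notifyUsers(payload) {
  // getSubscriptionsFromDb() is hypothetical: it returns whatever
  // { endpoint, keys: { auth, p256dh } } objects the clients sent up.
  const subscriptions = await getSubscriptionsFromDb();
  for (const subscription of subscriptions) {
    try {
      await webpush.sendNotification(subscription, JSON.stringify(payload));
    } catch (err) {
      console.log('Push failed for', subscription.endpoint, err);
    }
  }
}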
I have a Lambda function that makes a GetObject request to an S3 bucket.
However, I'm getting the following error:
AccessDenied: Access Denied
    at deserializeAws_restXmlGetObjectCommandError (/node_modules/@aws-sdk/client-s3/dist-cjs/protocols/Aws_restXml.js:6284:41)
    at processTicksAndRejections (internal/process/task_queues.js:95:5)
    at /node_modules/@aws-sdk/middleware-serde/dist-cjs/deserializerMiddleware.js:6:20
    at /node_modules/@aws-sdk/middleware-signing/dist-cjs/middleware.js:11:20
    at StandardRetryStrategy.retry (/node_modules/@aws-sdk/middleware-retry/dist-cjs/StandardRetryStrategy.js:51:46)
    at /node_modules/@aws-sdk/middleware-logger/dist-cjs/loggerMiddleware.js:6:22
    at GetS3Data (/src/input.ts:21:26)
    at Main (/src/main.ts:8:34)
    at Runtime.run [as handler] (/handler.ts:6:9) {
  Code: 'AccessDenied',
  RequestId: '3K61PMQGW4825D3W',
  HostId: '5PpmWpu2I4WZPx37Y0pRfDAcdCmjX8fchuE+HLpUzy7uqoJirtb9Os0g96kWfluM/ctkn/mEC5o=',
  '$fault': 'client',
  '$metadata': {
    httpStatusCode: 403,
    requestId: undefined,
    extendedRequestId: '5PpmWpu2I4WZPx37Y0pRfDAcdCmjX8fchuE+HLpUzy7uqoJirtb9Os0g96kWfluM/ctkn/mEC5o=',
    cfId: undefined,
    attempts: 1,
    totalRetryDelay: 0
  }
}
I've given the Lambda function access to make this request.
What is the issue?
serverless.ts
import type { AWS } from "@serverless/typescript";

const serverlessConfiguration: AWS = {
  service: "affiliations",
  frameworkVersion: "2",
  custom: {
    esbuild: {
      bundle: true,
      minify: false,
      sourcemap: true,
      exclude: ["aws-sdk"],
      target: "node14",
      define: { "require.resolve": undefined },
      platform: "node",
    },
  },
  plugins: ["serverless-esbuild"],
  provider: {
    name: "aws",
    region: "us-east-2",
    runtime: "nodejs14.x",
    apiGateway: {
      minimumCompressionSize: 1024,
      shouldStartNameWithService: true,
    },
    environment: {
      AWS_NODEJS_CONNECTION_REUSE_ENABLED: "1",
      NODE_OPTIONS: "--enable-source-maps --stack-trace-limit=1000",
    },
    lambdaHashingVersion: "20201221",
    vpc: {
      securityGroupIds: ["<redacted>"],
      subnetIds: ["<redacted>"],
    },
    iam: {
      role: {
        statements: [
          {
            Effect: "Allow",
            Action: ["s3:GetObject"],
            Resource: "<redacted>",
          },
        ],
      },
    },
  },
  useDotenv: true,
  // import the function via paths
  functions: {
    run: {
      handler: "handler.run",
      timeout: 300,
      events: [
        {
          sns: {
            arn: "<redacted>",
          },
        },
      ],
    },
  },
};

module.exports = serverlessConfiguration;
s3.ts
export const GetS3Data = async (payload: GetObjectRequest) => {
  try {
    const response = await S3Service.getObject(payload);
    const result = await new Promise((resolve, reject) => {
      const data = [];
      response.Body.on("data", (chunk) => data.push(chunk));
      response.Body.on("error", reject); // stream failure event is "error"
      response.Body.once("end", () => resolve(data.join("")));
    });
    return [result, null];
  } catch (err) {
    Logger.error({
      method: "GetS3Data",
      error: err.stack,
    });
    return [null, err];
  }
};
package.json
"#aws-sdk/client-s3": "^3.36.0",
Forgot to add /* to the end of the resource:
Resource: "<redacted>/*",
Your 403 Access Denied error is masking a 404 Not Found error; your code & Serverless config look perfectly fine & should work as expected, provided you've specified the resource correctly.
If you do not have the correct s3:ListBucket permission, the S3 endpoint will not return a 404 Not Found error when no object exists at the specified key.
GetObject's API reference highlights this nuance:
If you have the s3:ListBucket permission on the bucket, Amazon S3 will return an HTTP status code 404 ("no such key") error.
If you don't have the s3:ListBucket permission, Amazon S3 will return an HTTP status code 403 ("access denied") error.
This is to prevent attackers from enumerating public buckets & knowing what objects actually exist in the bucket.
The absence of a 404 in this case prevents leaking information about whether the object exists (just like a generic Invalid Credentials message on a login page, as opposed to Invalid Password, which would confirm that a user with the provided username exists).
Provide the Lambda with permission to carry out the s3:ListBucket action to unmask the 404 error and/or ultimately double-check your GetObjectRequest to make sure the key is being specified correctly for an object that does exist:
iam: {
  role: {
    statements: [
      {
        Effect: "Allow",
        Action: ["s3:GetObject", "s3:ListBucket"],
        Resource: "<redacted>",
      },
    ],
  },
}
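One related nuance: s3:ListBucket is a bucket-level action, while s3:GetObject applies to objects. If your redacted Resource is the object ARN ending in /*, split the statements so ListBucket targets the bare bucket ARN. A sketch with a hypothetical bucket name:

iam: {
  role: {
    statements: [
      {
        Effect: "Allow",
        Action: ["s3:GetObject"],
        Resource: "arn:aws:s3:::my-bucket/*", // the objects
      },
      {
        Effect: "Allow",
        Action: ["s3:ListBucket"],
        Resource: "arn:aws:s3:::my-bucket", // the bucket itself
      },
    ],
  },
}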
I am trying to create an application in my local system and deploy it to the AWS cloud using SAM CLI. The basic outline of this application is given in the diagram.
I have created a directory named myproj for this application and all the sub-folders and files are shown in the following diagram.
The template.yaml file consists of the following code -
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  myDB:
    Type: AWS::Serverless::SimpleTable
    Properties:
      TableName: tabmine
      PrimaryKey:
        Name: id
        Type: String
      ProvisionedThroughput:
        ReadCapacityUnits: 5
        WriteCapacityUnits: 5
  LambdaWrite:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: functionWrite/
      Handler: write.handler
      Runtime: nodejs12.x
      Events:
        apiForLambda:
          Type: Api
          Properties:
            Path: /writedb
            Method: post
      Policies:
        DynamoDBWritePolicy:
          TableName: !Ref myDB
  LambdaRead:
    Type: AWS::Serverless::Function
    Properties:
      CodeUri: functionRead/
      Handler: read.handler
      Runtime: nodejs12.x
      Events:
        apiForLambda:
          Type: Api
          Properties:
            Path: /readdb
            Method: post
      Policies:
        DynamoDBReadPolicy:
          TableName: !Ref myDB
In the functionRead folder, package.json has the following contents -
{
  "name": "myproj",
  "version": "1.0.0",
  "description": "",
  "main": "read.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC",
  "dependencies": {
    "aws-sdk": "^2.783.0"
  }
}
And the read.js file contains the following code -
var AWS = require('aws-sdk');
var ddb = new AWS.DynamoDB.DocumentClient();

exports.handler = function(event, context, callback) {
  var ID = event.id;
  var params = {
    TableName: 'tabmine',
    Key: {
      'id': ID
    }
  };
  ddb.get(params, callback);
};
In the functionWrite folder, the file package.json has the following content -
{
  "name": "myproj",
  "version": "1.0.0",
  "description": "",
  "main": "write.js",
  "scripts": {
    "test": "echo \"Error: no test specified\" && exit 1"
  },
  "author": "",
  "license": "ISC",
  "dependencies": {
    "aws-sdk": "^2.783.0"
  }
}
And the file write.js has the following content -
var AWS = require('aws-sdk');
var ddb = new AWS.DynamoDB.DocumentClient();

exports.handler = function(event, context, callback) {
  var ID = event.id;
  var NAME = event.name;
  var params = {
    TableName: 'tabmine',
    Item: {
      'id': ID,
      'name': NAME
    }
  };
  ddb.put(params, callback);
};
Then, I have navigated back to the myproj directory in the terminal and ran the command sam build.
After building was done, I ran the command sam deploy --guided and followed the steps to deploy the stack to the cloud. Then, I checked the console to confirm the deployment and it was successful.
Then, in the terminal, I ran curl -X POST -d '{"id":"one","name":"john"}' https://0000000000.execute-api.ap-south-1.amazonaws.com/Prod/writedb. But I got {message : 'Internal server Error'}.
To confirm whether the Lambda functions and DynamoDB are correctly linked, I went to the Lambda console and created a test event for the write.js function with the same payload {"id":"one","name":"john"}. It ran successfully and entered those two items into the DynamoDB table. Similarly, I created another test event for the read.js function with payload {"id":"one"}. It also ran successfully and displayed the data.
To confirm whether API Gateway and the Lambdas are correctly linked, I ran the test in API Gateway for both resources /writedb and /readdb, but it is giving me {message : 'Internal server Error'}.
Please help me out of this problem.
The event object that will come from the API gateway invocation will have the following structure -
{
  resource: '/func1',
  path: '/func1',
  httpMethod: 'POST',
  headers: {
    accept: '*/*',
    'content-type': 'application/x-www-form-urlencoded',
    Host: '0000000000.execute-api.ap-south-1.amazonaws.com',
    'User-Agent': 'curl/7.71.1',
    'X-Amzn-Trace-Id': 'Root=1-5fa018aa-60kdfkjsddd5f6c6c07a2',
    'X-Forwarded-For': '178.287.187.178',
    'X-Forwarded-Port': '443',
    'X-Forwarded-Proto': 'https'
  },
  multiValueHeaders: {
    accept: [ '*/*' ],
    'content-type': [ 'application/x-www-form-urlencoded' ],
    Host: [ '0000000000.execute-api.ap-south-1.amazonaws.com' ],
    'User-Agent': [ 'curl/7.71.1' ],
    'X-Amzn-Trace-Id': [ 'Root=1-5fa018aa-603d90fhgdhdgdjhj6c6dfda2' ],
    'X-Forwarded-For': [ '178.287.187.178' ],
    'X-Forwarded-Port': [ '443' ],
    'X-Forwarded-Proto': [ 'https' ]
  },
  queryStringParameters: null,
  multiValueQueryStringParameters: null,
  pathParameters: null,
  stageVariables: null,
  requestContext: {
    resourceId: 'scsu6k',
    resourcePath: '/func1',
    httpMethod: 'POST',
    extendedRequestId: 'VYjfdkjkjfjkfuA=',
    requestTime: '02/Nov/2020:14:33:14 +0000',
    path: '/test/func1',
    accountId: '00000000000',
    protocol: 'HTTP/1.1',
    stage: 'test',
    domainPrefix: 'f8h785q05a',
    requestTimeEpoch: 1604327594697,
    requestId: '459e0256-9c6f-4a24-bcf2-05520d6bc58a',
    identity: {
      cognitoIdentityPoolId: null,
      accountId: null,
      cognitoIdentityId: null,
      caller: null,
      sourceIp: '178.287.187.178',
      principalOrgId: null,
      accessKey: null,
      cognitoAuthenticationType: null,
      cognitoAuthenticationProvider: null,
      userArn: null,
      userAgent: 'curl/7.71.1',
      user: null
    },
    domainName: '000000000.execute-api.ap-south-1.amazonaws.com',
    apiId: 'lkjfslkfj'
  },
  body: '{"id":"1","name":"John"}',
  isBase64Encoded: false
}
From the above event, which is a JSON object, we need the body. Since the body is a string, we need to parse it into a JSON object. Note also that with Lambda proxy integration the handler must return a response with a statusCode and a string body; returning the raw DynamoDB result, as the original callback-style handlers did, is what produces the generic Internal server Error.
So, write.js should be modified to -
var AWS = require('aws-sdk');
var ddb = new AWS.DynamoDB({apiVersion: '2012-08-10'});

exports.handler = async (event) => {
  try {
    //console.log(event);
    //console.log(event.body);
    var obj = JSON.parse(event.body);
    //console.log(obj.id);
    //console.log(obj.name);
    var ID = obj.id;
    var NAME = obj.name;
    var params = {
      TableName: 'tabmine',
      Item: {
        id: {S: ID},
        name: {S: NAME}
      }
    };
    var data;
    var msg;
    try {
      data = await ddb.putItem(params).promise();
      console.log("Item entered successfully:", data);
      msg = 'Item entered successfully';
    } catch (err) {
      console.log("Error: ", err);
      msg = err;
    }
    var response = {
      'statusCode': 200,
      'body': JSON.stringify({
        message: msg
      })
    };
  } catch (err) {
    console.log(err);
    return err;
  }
  return response;
};
Similarly, read.js will get modified to -
var AWS = require('aws-sdk');
var ddb = new AWS.DynamoDB({apiVersion: '2012-08-10'});

exports.handler = async (event) => {
  try {
    //console.log(event);
    //console.log(event.body);
    var obj = JSON.parse(event.body);
    //console.log(obj.id);
    //console.log(obj.name);
    var ID = obj.id;
    var params = {
      TableName: 'tabmine',
      Key: {
        id: {S: ID}
      }
    };
    var data;
    try {
      data = await ddb.getItem(params).promise();
      console.log("Item read successfully:", data);
    } catch (err) {
      console.log("Error: ", err);
      data = err;
    }
    var response = {
      'statusCode': 200,
      'body': JSON.stringify({
        message: data
      })
    };
  } catch (err) {
    console.log(err);
    return err;
  }
  return response;
};
So, this will solve the problem; the template.yaml file is correct as it is.
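With the modified handlers deployed, the curl calls from the question should now return the JSON bodies built above (endpoint ID left as the question's placeholder):

curl -X POST -d '{"id":"one","name":"john"}' https://0000000000.execute-api.ap-south-1.amazonaws.com/Prod/writedb
curl -X POST -d '{"id":"one"}' https://0000000000.execute-api.ap-south-1.amazonaws.com/Prod/readdb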
I have an Apollo GraphQL server using the apollo-server-plugin-response-cache plugin and I need to determine whether or not I'm going to write to the cache based on incoming parameters. I have the plugin set up and I'm using the shouldWriteToCache hook. I can print out the GraphQLRequestContext object that gets passed into the hook, and I can see the full request source, but request.variables is empty. Other than parsing the query itself, how can I access the actual params for the resolver in this hook? (In the example below, I need the value of param2.)
Apollo Server:
new ApolloServer({
  introspection: true,
  playground: true,
  subscriptions: false,
  typeDefs,
  resolvers,
  cacheControl: {
    defaultMaxAge: 60
  },
  plugins: [
    apolloServerPluginResponseCache({
      cache, // This is an "apollo-server-cache-redis" instance
      shouldWriteToCache: (requestContext) => {
        // I get a lot of info here, including the source query, but not the
        // parsed-out query variables
        console.log(requestContext.request);
        // What I want to do here is:
        return !requestContext.request.variables.param2
        // but `variables` is empty, and I can't see that value parsed anywhere else
      }
    })
  ]
})
Here is my resolver:
export async function exapi(variables, context) {
  // in here I use context.param1 and context.param2
  // ...
}
I have also tried:
export async function exapi(variables, { param1, param2 }) {
  // ...
}
Here is what I get logged out from the code above:
{
  query: '{\n' +
    '  exapi(param1: "value1", param2: true) {\n' +
    '    records\n' +
    '  }\n' +
    '}\n',
  operationName: null,
  variables: {}, // <-- this is empty?! How can I get param2's value??
  extensions: undefined,
  http: Request {
    size: 0,
    timeout: 0,
    follow: 20,
    compress: true,
    counter: 0,
    agent: undefined,
    [Symbol(Body internals)]: { body: null, disturbed: false, error: null },
    [Symbol(Request internals)]: {
      method: 'POST',
      redirect: 'follow',
      headers: [Headers],
      parsedURL: [Url],
      signal: null
    }
  }
}
If you didn't provide variables for the GraphQL query, you can get the arguments from the GraphQL query string via the ArgumentNode entries of the parsed AST.
If you did provide variables for the GraphQL query (e.g. query ($p2: Boolean) { exapi(param2: $p2) } sent with variables {"p2": true}), you will get them from requestContext.request.variables.
E.g.
server.js:
import apolloServerPluginResponseCache from 'apollo-server-plugin-response-cache';
import { ApolloServer, gql } from 'apollo-server';
import { RedisCache } from 'apollo-server-cache-redis';

const typeDefs = gql`
  type Query {
    exapi(param1: String, param2: Boolean): String
  }
`;

const resolvers = {
  Query: {
    exapi: (_, { param1, param2 }) => 'teresa teng',
  },
};

const cache = new RedisCache({ host: 'localhost', port: 6379 });

const server = new ApolloServer({
  introspection: true,
  playground: true,
  subscriptions: false,
  typeDefs,
  resolvers,
  cacheControl: {
    defaultMaxAge: 60,
  },
  plugins: [
    apolloServerPluginResponseCache({
      cache,
      shouldWriteToCache: (requestContext) => {
        console.log(requestContext.document.definitions[0].selectionSet.selections[0].arguments);
        return true;
      },
    }),
  ],
});

server.listen().then(({ url }) => console.log(`🚀 Server ready at ${url}`));
GraphQL query:
query {
  exapi(param1: "value1", param2: true)
}
Server logs print param1 and param2 arguments:
🚀 Server ready at http://localhost:4000/
[]
[ { kind: 'Argument',
name: { kind: 'Name', value: 'param1', loc: [Object] },
value:
{ kind: 'StringValue',
value: 'value1',
block: false,
loc: [Object] },
loc: { start: 15, end: 31 } },
{ kind: 'Argument',
name: { kind: 'Name', value: 'param2', loc: [Object] },
value: { kind: 'BooleanValue', value: true, loc: [Object] },
loc: { start: 33, end: 45 } } ]
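To act on a single argument rather than logging the whole list, search the ArgumentNode array by name. A minimal sketch (assuming param2 is supplied inline in the query rather than through a variable):

shouldWriteToCache: (requestContext) => {
  const args =
    requestContext.document.definitions[0].selectionSet.selections[0].arguments;
  const param2Arg = args.find((arg) => arg.name.value === 'param2');
  // For an inline BooleanValue node the parsed value sits on value.value
  const param2 = param2Arg ? param2Arg.value.value : undefined;
  return !param2;
},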
I want to list the files in my s3 bucket within a folder; my bucket structure is
a_bucket
├── folder_1
├── folder_2
└── folder_3
I want to list the files in folder_1 only.
My problem got solved:
s3client.list({ "prefix": "folder_1/" + filePrefix }, function (err, data) {
  /* `data` will look roughly like:
  {
    Prefix: 'my-prefix',
    IsTruncated: true,
    MaxKeys: 1000,
    Contents: [
      {
        Key: 'whatever',
        LastModified: new Date(2012, 11, 25, 0, 0, 0),
        ETag: 'whatever',
        Size: 123,
        Owner: 'you',
        StorageClass: 'whatever'
      },
      ⋮
    ]
  }
  */
});
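For reference, the same listing with the official aws-sdk (v2) client instead of a third-party wrapper; a sketch, assuming the bucket is named a_bucket and filePrefix is the same variable as above:

const AWS = require('aws-sdk');
const s3 = new AWS.S3();

s3.listObjectsV2(
  { Bucket: 'a_bucket', Prefix: 'folder_1/' + filePrefix },
  (err, data) => {
    if (err) return console.error(err);
    // data.Contents holds { Key, LastModified, ETag, Size, ... } entries
    data.Contents.forEach((obj) => console.log(obj.Key));
  }
);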