how to store personal information for an image using amazon rekognition - amazon-rekognition

I am trying to create an app that uses Amazon Rekognition in AWS to identify a person and retrieve that person's information from an internal storage system. I wanted to know how to connect the Amazon Rekognition part with the information stored in the database. The face detection part will be done by Amazon Rekognition, but how will I store and retrieve the personal information after a face is detected?

You can attach an ExternalImageId when you index the face, and that ExternalImageId can map to the primary key in your database.
ExternalImageId is returned in the search results of all the search APIs.
Refer:
https://docs.aws.amazon.com/rekognition/latest/dg/API_Face.html#rekognition-Type-Face-ExternalImageId
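A minimal boto3 sketch of this mapping (the collection name and the idea of using your DB primary key as the ExternalImageId are assumptions; the second function is a pure helper that just parses a SearchFacesByImage response):

```python
def index_face(bucket, key, person_id, collection_id="people"):
    """Index a face; ExternalImageId carries your DB primary key.
    Requires boto3 and AWS credentials when actually run."""
    import boto3  # lazy import so this module loads without boto3
    client = boto3.client("rekognition")
    return client.index_faces(
        CollectionId=collection_id,
        Image={"S3Object": {"Bucket": bucket, "Name": key}},
        ExternalImageId=str(person_id),
        MaxFaces=1,
    )

def external_ids_from_search(response, min_similarity=90.0):
    """Pure helper: pull ExternalImageId values out of a
    SearchFacesByImage response, best match first, so you can
    look the person up in your own database."""
    matches = sorted(response.get("FaceMatches", []),
                     key=lambda m: m["Similarity"], reverse=True)
    return [m["Face"]["ExternalImageId"] for m in matches
            if m["Similarity"] >= min_similarity]
```

After a search, feed the returned ExternalImageId straight into a `GetItem`/`SELECT` against your table.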

Related

Is there a solution for encrypting data in DynamoDB for nodejs

I'm researching solutions for data encryption in DynamoDB.
I found that there's a library, the DynamoDB Encryption Client, but it's only for Java/Python (our project is written in Node.js).
Also, there's another package, the AWS Encryption SDK, which works with AWS KMS.
The SDK can't encrypt at the table level. That means I have to traverse all the fields in an item and encrypt each one before storing it in the DB, which doesn't seem like a great idea.
To achieve what you want, you have to save the encrypted data in DynamoDB and encrypt/decrypt at the application layer. Encryption of data from your application layer onwards is your responsibility; AWS provides many services that help you implement it. So the data flow will be something like:
Application(encrypt) -> DynamoDB
DynamoDB -> (decrypt) Application
Examples of encryption in Node.js using the AWS KMS service can be found here: https://docs.aws.amazon.com/encryption-sdk/latest/developer-guide/js-examples.html
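The per-field flow above can be sketched as follows (shown in Python for brevity; the same shape applies in Node.js). The field names are made up, and the stand-in codec is base64 only, NOT real encryption; in production you would plug AWS KMS / the Encryption SDK in here:

```python
import base64

SENSITIVE = {"ssn", "email"}  # hypothetical sensitive field names

def transform_item(item, fn, fields=SENSITIVE):
    """Apply fn to each sensitive field: encrypt before PutItem,
    decrypt after GetItem."""
    return {k: (fn(v) if k in fields else v) for k, v in item.items()}

def fake_encrypt(value):   # placeholder; replace with a KMS encrypt call
    return base64.b64encode(value.encode()).decode()

def fake_decrypt(value):   # placeholder; replace with a KMS decrypt call
    return base64.b64decode(value.encode()).decode()

item = {"pk": "user#1", "email": "a@b.c", "ssn": "123-45-6789"}
stored = transform_item(item, fake_encrypt)      # what you'd PutItem
restored = transform_item(stored, fake_decrypt)  # after GetItem
```

Note that key attributes (`pk` here) must stay in the clear, or you lose the ability to query by them.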

Can Azure computer vision API access image file in AWS S3?

AWS is currently the approved cloud vendor in my organization. For a new use case related to OCR, I'm exploring the Computer Vision service in Azure (I have read that this service is better than the corresponding AWS Textract service). Our approach is to keep the input image files in S3 and use an AWS Lambda function to invoke the Azure Computer Vision service (either through the REST API or the Python SDK). AWS will be the primary cloud vendor for most aspects (specifically storage), and we plan to access Azure services through the API for additional needs.
My question is: will the Azure API/SDK accept an image file in S3 as input (of course we will do whatever is needed to make the file in S3 securely accessible to the Azure API)? When I read the Azure documentation, it says the image should be accessible as a URL, and there is no mention that the image needs to exist in Azure storage. Does the image URL have to be publicly accessible (I believe this should not be the case)?
Unfortunately the Cognitive API does not support authentication when passing a URL.
An approach would be to write an Azure Function that:
Authenticates to S3 and reads the image file using the Amazon S3 API
Converts the file into a byte array
Sends the byte array to the Computer Vision API
Receives the JSON result from the Computer Vision API and processes it accordingly
You could have the Azure function perform the business logic you need and process and store the results, or you could build the function as a web service proxy that takes a S3 location in as a parameter and returns the Computer Vision result.
You could also build the functionality out in AWS using Lambda.
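A rough sketch of that proxy in Python (the endpoint path, bucket, and key are placeholders; a real call needs boto3, requests, and valid keys — the first function is a pure helper that only builds the request):

```python
def build_cv_request(endpoint, subscription_key):
    """Pure helper: URL and headers for sending raw image bytes
    to a Computer Vision endpoint."""
    return (
        f"{endpoint}/vision/v3.2/ocr",  # assumed API version/path
        {
            "Ocp-Apim-Subscription-Key": subscription_key,
            "Content-Type": "application/octet-stream",
        },
    )

def ocr_s3_image(bucket, key, endpoint, subscription_key):
    """Read the object from S3 and POST its bytes to Computer Vision."""
    import boto3, requests  # lazy: only needed on a real call
    body = boto3.client("s3").get_object(Bucket=bucket, Key=key)["Body"].read()
    url, headers = build_cv_request(endpoint, subscription_key)
    resp = requests.post(url, headers=headers, data=body)
    resp.raise_for_status()
    return resp.json()
```

The `application/octet-stream` content type is what lets you pass the image as a byte array instead of a URL.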

Amazon web services, How to start

I have started working with AWS. I have a huge amount of Excel data to be stored in AWS, and I need to access that data through AWS APIs. Please help me figure out where to start.
I am not clear about the use case. Can anyone explain the difference between Amazon Simple Storage Service (Amazon S3), Amazon Elastic Compute Cloud (Amazon EC2), and Amazon SimpleDB in simple words?
Please help me with how to start from scratch.
Thanks,
In simple terms:
S3: File Storage
EC2: Virtual Servers (Linux or Windows)
SimpleDB: A NoSQL database. Most people use the newer DynamoDB service these days.
You really aren't giving enough information for anyone to provide detailed help. Some options:
Store the data as files in S3 and query it using Athena.
Store the files on a file system on an EC2 server and run any Windows or Linux program there to work with them.
Store the data in a relational OLTP database using the RDS service.
Store the data in an OLAP database using Redshift.
Store the data in a NoSQL database using DynamoDB.
If it is an extremely large amount of data, you might want to look into the Elastic MapReduce service to process it.
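As one possible starting point for the S3 option: flatten the spreadsheet rows to CSV and upload them to S3, where Athena (or anything else) can query them. The bucket, key, and columns below are made up; getting rows out of Excel itself would need a library such as openpyxl:

```python
import csv, io

def rows_to_csv(rows, header):
    """Pure helper: serialize rows to a CSV string Athena can read."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(header)
    writer.writerows(rows)
    return buf.getvalue()

def upload_csv(rows, header, bucket="my-data-bucket", key="data/sheet1.csv"):
    """Upload the CSV to S3. Needs boto3 and credentials when run."""
    import boto3  # lazy import so the sketch loads without boto3
    boto3.client("s3").put_object(
        Bucket=bucket, Key=key,
        Body=rows_to_csv(rows, header).encode("utf-8"))
```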

Query images in object storage by metadata

I have over 10 GB of images for my ecommerce app, and I am thinking of moving them to object storage (S3, Azure, Google, etc.).
That would give me the opportunity to attach custom data to the metadata (like NoSQL). For example, an image and its corresponding metadata: product_id, sku, tags.
I want to query my images by metadata. For example: get all images from my object storage where meta_key = 'tag' and tag = 'nature'.
So the object storage should have indexing capabilities; I do not want to iterate over billions of images to find a single one.
I'm new to Amazon AWS, Azure, Google, and OpenStack. I know that Amazon S3 can store metadata, but it doesn't have indexes (like Apache Solr).
Which service is best suited to query files/objects by custom metadata?
To do this in AWS, your best bet is going to be to pair the object store (S3) with a traditional database that stores the metadata for easy querying.
Depending on your needs DynamoDB or RDS (in the flavor of your choice) would be 2 AWS technologies to consider for the meta-data storage and retrieval.
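One common pattern for "query by tag" with DynamoDB: keep the binaries in S3 and write one DynamoDB row per (tag, object_key) pair, with the tag as the partition key, so "all images tagged nature" becomes a single Query. The table and attribute names below are hypothetical; the first function is a pure helper:

```python
def tag_rows(object_key, metadata):
    """Pure helper: expand one object's metadata into the rows you'd
    PutItem into a table keyed (tag, object_key)."""
    return [
        {"tag": t, "object_key": object_key,
         "product_id": metadata["product_id"], "sku": metadata["sku"]}
        for t in metadata.get("tags", [])
    ]

def images_with_tag(tag, table_name="image_tags"):
    """Query all object keys carrying a tag. Needs boto3 + credentials."""
    import boto3  # lazy import so the sketch loads without boto3
    from boto3.dynamodb.conditions import Key
    table = boto3.resource("dynamodb").Table(table_name)
    resp = table.query(KeyConditionExpression=Key("tag").eq(tag))
    return [item["object_key"] for item in resp["Items"]]
```

The trade-off is write amplification (one row per tag) in exchange for cheap, indexed reads.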

Amazon Cloud, storing assets

I am creating a web app for co-working. I have text with assets (most of them pictures in print quality, say 5 MB each, about 5 GB per month in total).
I am going to host this on Amazon's cloud, using EC2 instances for a Node.js server and a MongoDB with attached block storage.
The assets are private to an object, so not everyone will have access to them.
How should I handle the assets? Save them as binaries in the database, or upload them to S3 (or any other Amazon service)?
Does somebody have experience with this? Or maybe some helpful links? Thanks in advance.
I would probably store the assets on S3 without public access; you can then grant access to authorized users by generating temporary signed URLs from your web servers when needed.
This way you hand the storage dirty work over to S3 and keep your servers simpler, while your files can still be accessed only by those who are authorized.
If you need to do access control there are lots of things you could do, but the most obvious would be to serve the assets through your web server and have it implement the desired access control logic. Your app could proxy through the source object from S3 or MongoDB GridFS. If you are already using MongoDB, in this particular case I would use GridFS, unless you want some of the cost-saving features of S3 such as reduced redundancy storage.