Check if a file exists in S3 using GraphQL code - Node.js

Can someone tell me how to check if a file exists in an S3 bucket using GraphQL code (Node.js)?
As of now I am using:
await s3.headObject(existParams).promise();
But the above is not working: I just keep waiting for the response, it does not return anything, and after waiting for about a minute it times out (504).

Related

Troubleshoot DynamoDB to Elasticsearch

Let's suppose I have a database on DynamoDB, and I am currently using streams and lambda functions to send that data to Elasticsearch.
Here's the thing: supposing the data is saved successfully on DynamoDB, is there a way for me to be 100% sure that it has been saved on Elasticsearch as well?
Considering I have a function to save that data to DDB, is there a way for me to communicate with the Lambda function triggered by DDB before returning a status code, so I can receive confirmation before returning?
I want to do that in order to return OK from both my function and the Lambda function at the same time.
This doesn't look like the correct approach for this problem. We generally use DynamoDB Streams + Lambda for operations that are async in nature and when we don't have to communicate the status of this Lambda execution to the client.
So I suggest the following two approaches, which are the closest to what you are trying to achieve:
Make the operation completely synchronous, i.e., do the DynamoDB insert and Elasticsearch insert in the same call (without any Ddb Stream and Lambda triggers). This will ensure that you return the correct status of both writes to the client. Also, in case the ES insert fails, you have the option to revert the Ddb write and then return the complete status as failed.
The first approach obviously adds to the latency of the function. So you can continue with your original approach, but let the client know about it. It will work as follows:
Client calls your API.
API inserts record into Ddb and returns to the client.
The client receives the status and displays a message to the user that their request is being processed.
The client then starts polling for the status of the ES insert via another API.
Meanwhile, the Ddb stream triggers the ES insert Lambda fn and completes the ES write.
The poller on the client comes to know about the successful insert into ES and displays a final success message to the user.
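For illustration, here is a minimal sketch of the first (fully synchronous) approach in Python, assuming a boto3 DynamoDB table and the elasticsearch Python client (7.x-style index call); the table name, index name, endpoint and 'id' key are placeholders:

import boto3
from elasticsearch import Elasticsearch

dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('my_table')                    # placeholder table name
es = Elasticsearch(['https://my-es-endpoint:9200'])   # placeholder ES endpoint

def save_record(record):
    # Write to DynamoDB first
    table.put_item(Item=record)
    try:
        # Then index the same record into Elasticsearch synchronously
        es.index(index='my_index', id=record['id'], body=record)
    except Exception:
        # If the ES insert fails, revert the Ddb write and report failure
        table.delete_item(Key={'id': record['id']})
        return {'statusCode': 500, 'body': 'write failed'}
    return {'statusCode': 200, 'body': 'ok'}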

Writing JSON to AWS S3 from AWS Lambda

I am trying to write a response to AWS S3 as a new file each time.
Below is the code I am using:
s3 = boto3.resource('s3', region_name=region_name)
s3_obj = s3.Object(s3_bucket, f'/{folder}/{file_name}.json')
resp_ = s3_obj.put(Body=json.dumps(response_json).encode('UTF-8'))
I can see that I get a 200 response and the file in the directory as well, but it also produces the exception below:
[DEBUG] 2020-10-13T08:29:10.828Z. Event needs-retry.s3.PutObject: calling handler <bound method S3RegionRedirector.redirect_from_error of <botocore.utils.S3RegionRedirector object at 0x7f2cf2fdfe123>>
My code throws a 500 exception even though it works. I have other business logic as part of the Lambda, and things work just fine since the write to S3 is the last operation. Any help would be appreciated.
The Key (filename) of an Amazon S3 object should not start with a slash (/).
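For reference, a minimal corrected sketch (same variables as in the question) with no leading slash in the key:

import json
import boto3

s3 = boto3.resource('s3', region_name=region_name)
# Key without a leading slash: 'folder/file.json', not '/folder/file.json'
s3_obj = s3.Object(s3_bucket, f'{folder}/{file_name}.json')
resp_ = s3_obj.put(Body=json.dumps(response_json).encode('UTF-8'))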

500 Internal error while saving data into Firestore

I have the following process to load documents into Firestore:
Upload a document (it's a JSON file) into a GCS bucket,
Trigger a Cloud Function when the document gets uploaded into the bucket and save the uploaded document into Firestore.
I am using the code below for saving data into Firestore:
# Save document in Firestore
collection = db.collection(u'my_collection')
try:
    collection.document(u'' + file_name + '').set(data)
    print('Data saved successfully with document id {}'.format(file_name))
except Exception as e:
    print('Exception occurred while saving data into firestore.', e)
The problem arises when I upload a large number of files (1000-2000) into the bucket simultaneously. A few documents saved successfully, but for a few of them I got the error below.
Exception occurred while saving data into firestore. 500 An internal error occurred.
Edit 1: The above error occurs when calling the set() method.
What is the way to diagnose why this occurred? Is there a quota or limit issue, or anything else?
Any suggestions would be of great help. Thank you.
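One common mitigation for transient 500s under heavy concurrent writes (assuming that is what is happening here) is to retry the set() call with exponential backoff. A minimal sketch, reusing the same db, file_name and data as above; the attempt count and delays are arbitrary:

import time
import random

collection = db.collection(u'my_collection')
for attempt in range(5):
    try:
        collection.document(file_name).set(data)
        print('Data saved successfully with document id {}'.format(file_name))
        break
    except Exception as e:
        if attempt == 4:
            # Give up after the final attempt and surface the error
            print('Exception occurred while saving data into firestore.', e)
            raise
        # Back off exponentially (1s, 2s, 4s, 8s) plus a little jitter
        time.sleep((2 ** attempt) + random.random())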

Lambda which reads jpg/vector files from S3 and processes them using graphicsmagick

We have a lambda which reads jpg/vector files from S3 and processes them using graphicsmagick.
This Lambda was working fine until today, but since this morning we are getting errors while processing vector images using graphicsmagick.
"Error: Command failed: identify: unable to load module /usr/lib64/ImageMagick-6.7.8/modules-Q16/coders/ps.la': file not found # error/module.c/OpenModule/1278.
identify: no decode delegate for this image format/tmp/magick-E-IdkwuE' # error/constitute.c/ReadImage/544."
The above error is occurring for certain .eps files (vector) while using the identify function of the gm module.
Could you please share your insights on this?
Please let us know whether any backend changes have gone through on the AWS end for the ImageMagick module recently which might have had an effect on this Lambda.

boto3 s3 connection error: An error occurred (SignatureDoesNotMatch) when calling the ListBuckets operation

I'm using the boto3 package to connect from outside an s3 cluster (i.e. the script is currently not being run within the AWS 'cloud', but from my MBP connecting to the relevant cluster). My code:
s3 = boto3.resource(
    "s3",
    aws_access_key_id=self.settings['CREDENTIALS']['aws_access_key_id'],
    aws_secret_access_key=self.settings['CREDENTIALS']['aws_secret_access_key'],
)
bucket = s3.Bucket(self.settings['S3']['bucket_test'])
for bucket_in_all in boto3.resource('s3').buckets.all():
    if bucket_in_all.name == self.settings['S3']['bucket_test']:
        print("Bucket {} verified".format(self.settings['S3']['bucket_test']))
Now I'm receiving this error message:
botocore.exceptions.ClientError: An error occurred (SignatureDoesNotMatch) when calling the ListBuckets operation
I'm aware of the sequence in which the AWS credentials are checked, I have tried different permutations of my environment variables and ~/.aws/credentials, and I know that the credentials passed in my .py script should override them; however, I'm still seeing this SignatureDoesNotMatch error message. Any ideas where I may be going wrong? I've also tried:
# Create a session
session = boto3.session.Session(
    aws_access_key_id=self.settings['CREDENTIALS']['aws_access_key_id'],
    aws_secret_access_key=self.settings['CREDENTIALS']['aws_secret_access_key'],
    aws_session_token=self.settings['CREDENTIALS']['session_token'],
    region_name=self.settings['CREDENTIALS']['region_name']
)
s3 = boto3.resource('s3')
for bucket in s3.buckets.all():
    print(bucket.name)
...however, I also see the same error traceback.
Actually, this was partly answered by @John Rotenstein and @bdcloud; nevertheless, I need to be more specific...
The following code was, in my case, not necessary and was causing the error message:
# Create a session
session = boto3.session.Session(
    aws_access_key_id=self.settings['CREDENTIALS']['aws_access_key_id'],
    aws_secret_access_key=self.settings['CREDENTIALS']['aws_secret_access_key'],
    aws_session_token=self.settings['CREDENTIALS']['session_token'],
    region_name=self.settings['CREDENTIALS']['region_name']
)
The credentials now stored in self.settings mirror those in ~/.aws/credentials. Weirdly (and like last week, when the reverse happened), I now have access. It could be that a simple reboot of my laptop meant that my new credentials in ~/.aws/credentials (since I updated these again yesterday) were then 'accepted'.
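In other words, simply relying on boto3's default credential chain (environment variables, then ~/.aws/credentials) was enough. A minimal sketch of the simplified version:

import boto3

# Rely on the default credential chain (env vars, then ~/.aws/credentials)
# instead of passing keys explicitly
s3 = boto3.resource('s3')
for bucket in s3.buckets.all():
    print(bucket.name)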
