botocore.exceptions.ClientError: An error occurred (InvalidAccessKeyId) when calling the GetObject operation - python-3.x

Even though I am passing the correct access key ID, secret key, and session token, I am getting an error while running the code below. Is anything missing in this code?
import boto3

session = boto3.Session(
    region_name='us-east-1',
    aws_secret_access_key='XXXX',
    aws_access_key_id='YYYY',
    aws_session_token='ZZZZ')

s3_client = session.client('s3')
response = s3_client.get_object(Bucket='dev-bucket-test',
                                Key='abc.xlsx')
data = response['Body'].read()
print(data)
Error:
botocore.exceptions.ClientError: An error occurred (InvalidAccessKeyId) when calling the GetObject operation: The AWS Access Key Id you provided does not exist in our records.

I would like to suggest a better approach.
There is a credentials file in the ~/.aws directory; try to put your credentials there under the [default] profile. That lets you make all calls without writing credentials in code.
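For reference, a minimal sketch of that setup, assuming the default file location ~/.aws/credentials and placeholder values:
[default]
aws_access_key_id = YYYY
aws_secret_access_key = XXXX
aws_session_token = ZZZZ
With that file in place, boto3 picks the credentials up automatically and the session no longer needs explicit keys:
import boto3

session = boto3.Session(region_name='us-east-1')
s3_client = session.client('s3')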

Related

Writing json to AWS S3 from AWS Lambda

I am trying to write a response to AWS S3 as a new file each time.
Below is the code I am using
import json
import boto3

s3 = boto3.resource('s3', region_name=region_name)
s3_obj = s3.Object(s3_bucket, f'/{folder}/{file_name}.json')
resp_ = s3_obj.put(Body=json.dumps(response_json).encode('UTF-8'))
I can see that I get a 200 response and the file in the folder as well, but it also produces the exception below:
[DEBUG] 2020-10-13T08:29:10.828Z. Event needs-retry.s3.PutObject: calling handler <bound method S3RegionRedirector.redirect_from_error of <botocore.utils.S3RegionRedirector object at 0x7f2cf2fdfe123>>
My code throws a 500 exception even though it works. I have other business logic as part of the Lambda, and things work just fine since the write to S3 is the last operation. Any help would be appreciated.
The Key (filename) of an Amazon S3 object should not start with a slash (/).
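Applied to the code above, that just means building the key without the leading slash:
s3_obj = s3.Object(s3_bucket, f'{folder}/{file_name}.json')  # no leading '/'
resp_ = s3_obj.put(Body=json.dumps(response_json).encode('UTF-8'))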

Exception: 401 Request had invalid authentication credentials. Expected OAuth 2 access token, login cookie or other valid authentication credential

I've seen a few people over the years facing a similar issue, but I haven't found much regarding my case.
I have a backend built with python3.
I am using firebase_admin as a library to connect to Firebase Cloud Firestore.
I then commit my code to GitHub, and using GitHub Actions I deploy the Docker container to Google Cloud Run.
This all works fine for some time, but after a while it throws the following exception:
Exception: 401 Request had invalid authentication credentials. Expected OAuth 2 access token, login cookie or other valid authentication credential. See https://developers.google.com/identity/sign-in/web/devconsole-project.
What am I doing wrong?
Should I connect every time I call the function? (move the db initialization in each function call)
Feels like the token expired. Can I refresh it somehow?
Python code:
import logging

from firebase_admin import firestore, initialize_app

initialize_app()
db = firestore.client()

def get_info_from_firestore(name: str):
    try:
        ratings = db.collection(u'data').where(u'title', u'==', name).stream()
        for rating in ratings:
            return rating.to_dict()
        return None
    except Exception as e:
        logging.warning(f'Exception: {e}')
        return None
And this file is imported from my root python file that's using Flask.
Edit: One final thing that might help: if I redeploy my container without any changes, it all works again.
Likely this is happening due to time drift: the clock inside Docker significantly differs from real time. After a restart the time is in sync, but after a while it drifts, and Google does not like it. See more about this WSL/container issue here https://github.com/microsoft/WSL/issues/4245 and here https://github.com/docker/for-win/issues/4526
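A rough way to confirm the drift from inside the container is to compare the local clock with the Date header of any HTTPS response; this is only a sketch and the endpoint is just an example:
import datetime
import email.utils
import urllib.request

# Compare the container clock with the server-reported time (assumes outbound network access)
resp = urllib.request.urlopen('https://www.google.com')
server_time = email.utils.parsedate_to_datetime(resp.headers['Date'])
local_time = datetime.datetime.now(datetime.timezone.utc)
print('clock drift (seconds):', (local_time - server_time).total_seconds())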

Is ZONE_RESOURCE_POOL_EXHAUSTED a googleapiclient.errors.HttpError?

I am using the googleapiclient (Python) APIs like images().get(), images().insert(), etc. to list images, create VM instances, and so on.
There were several ZONE_RESOURCE_POOL_EXHAUSTED errors from Google Cloud last month, which caused the following exception in my code:
Exception: {'errors': [{'code': 'ZONE_RESOURCE_POOL_EXHAUSTED', 'message': "The zone 'projects/<project-name>/zones/us-central1-b' does not have enough resources available to fulfill the request. Try a different zone, or try again later."}]}
I want to handle this error in my code by sending a unique error code from my server to the client to retry the request after some time, since the error is transient.
I am not able to reproduce this error at will for the same reason: it's transient.
I checked the googleapi code on GitHub at https://github.com/googleapis/google-api-python-client
but couldn't find ZONE_RESOURCE_POOL_EXHAUSTED
I need to verify whether it's an exception of type "HttpError" or some other class, so that I can handle it in my code.
I am already handling exceptions of type googleapiclient.errors.HttpError in my code by printing an error message and re-raising as urllib.error.HTTPError (the server sends the code e.resp['status'] for this case to the client):
except HttpError as e:
    print('Failed to create %s: %s' % (instanceName, e._get_reason()))
    raise HTTPError(
        None, int(e.resp['status']), e._get_reason(), "", None)
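As a sketch of the handling I have in mind (the helper name is a placeholder), I could look for the code in the payload quoted above before deciding to retry:
def is_zone_pool_exhausted(error_payload):
    # error_payload is the dict shown above, e.g.
    # {'errors': [{'code': 'ZONE_RESOURCE_POOL_EXHAUSTED', 'message': '...'}]}
    if not isinstance(error_payload, dict):
        return False
    return any(err.get('code') == 'ZONE_RESOURCE_POOL_EXHAUSTED'
               for err in error_payload.get('errors', []))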

boto3 s3 connection error: An error occurred (SignatureDoesNotMatch) when calling the ListBuckets operation

I'm using the boto3 package to connect to S3 from outside the AWS cloud (i.e. the script is not being run within AWS, but from my MacBook Pro connecting to the relevant bucket). My code:
s3 = boto3.resource(
    "s3",
    aws_access_key_id=self.settings['CREDENTIALS']['aws_access_key_id'],
    aws_secret_access_key=self.settings['CREDENTIALS']['aws_secret_access_key'],
)
bucket = s3.Bucket(self.settings['S3']['bucket_test'])
for bucket_in_all in boto3.resource('s3').buckets.all():
    if bucket_in_all.name == self.settings['S3']['bucket_test']:
        print("Bucket {} verified".format(self.settings['S3']['bucket_test']))
Now I'm receiving this error message:
botocore.exceptions.ClientError: An error occurred (SignatureDoesNotMatch) when calling the ListBuckets operation
I'm aware of the order in which AWS credentials are checked, have tried different permutations of my environment variables and ~/.aws/credentials, and know that the credentials in my .py script should take precedence; however, I'm still seeing this SignatureDoesNotMatch error message. Any ideas where I may be going wrong? I've also tried:
# Create a session
session = boto3.session.Session(
    aws_access_key_id=self.settings['CREDENTIALS']['aws_access_key_id'],
    aws_secret_access_key=self.settings['CREDENTIALS']['aws_secret_access_key'],
    aws_session_token=self.settings['CREDENTIALS']['session_token'],
    region_name=self.settings['CREDENTIALS']['region_name']
)
s3 = boto3.resource('s3')
for bucket in s3.buckets.all():
    print(bucket.name)
...however I also see the same error traceback.
Actually, this was partly answered by @John Rotenstein and @bdcloud; nevertheless, I need to be more specific...
The following code in my case was not necessary and causing the error message:
# Create a session
session = boto3.session.Session(
    aws_access_key_id=self.settings['CREDENTIALS']['aws_access_key_id'],
    aws_secret_access_key=self.settings['CREDENTIALS']['aws_secret_access_key'],
    aws_session_token=self.settings['CREDENTIALS']['session_token'],
    region_name=self.settings['CREDENTIALS']['region_name']
)
The credentials stored in self.settings now mirror ~/.aws/credentials. Weirdly (much like last week, when the reverse happened), I now have access. It could be that a simple reboot of my laptop meant that the new credentials in ~/.aws/credentials (since I updated these again yesterday) were then 'accepted'.
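For completeness, a minimal sketch of the simplified call path, with the explicit session removed and everything coming from ~/.aws/credentials (the bucket lookup is reused from the question):
import boto3

s3 = boto3.resource('s3')  # credentials and region come from ~/.aws/credentials
for bucket in s3.buckets.all():
    if bucket.name == self.settings['S3']['bucket_test']:
        print("Bucket {} verified".format(bucket.name))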

Boto3 ListObject Forbidden for Admin User

I have been trying to write a small script to download all the content of an S3 folder to Lambda's /tmp. To do this I need to list all objects in a specific bucket. Unfortunately I keep getting the following error:
An error occurred (403) when calling the HeadObject operation: Forbidden
Here is how I try to download all the files from a folder:
#initialize S3
try:
    s3 = boto3.resource('s3',
                        aws_access_key_id=os.getenv('S3USERACCESSKEY'),
                        aws_secret_access_key=os.getenv('S3USERSECRETKEY')
                        )
    s3_client = boto3.client('s3',
                             aws_access_key_id=os.getenv('S3USERACCESSKEY'),
                             aws_secret_access_key=os.getenv('S3USERSECRETKEY')
                             )
except Exception as e:
    logger.error("Could not connect to s3 bucket: " + str(e))

#Function to download whole folders from s3
for s3_key in s3_client.list_objects(Bucket=os.getenv('S3BUCKETNAME'))['Contents']:
    s3_object = s3_key['Key']
    if not s3_object.endswith("/"):
        s3_client.download_file('bucket', s3_object, s3_object)
    else:
        import os
        if not os.path.exists(s3_object):
            os.makedirs(s3_object)
The access keys above have full admin rights:
EDIT
Still no success after removing my manual keys; here are the rights I attached to Lambda:
Here is the actual error from CloudWatch:
The code now looks like so:
#initialize S3
try:
    s3 = boto3.resource('s3')
    s3_client = boto3.client('s3')
except Exception as e:
    [....]
It seems like "Forbidden" might be a different issue than permissions, but I can't find any documentation on it.
Make sure the access key belongs to a user whose IAM permissions include rights to access the S3 bucket.
If you run from Lambda, there's no need to use an access key; just attach the IAM role to the Lambda function:
https://docs.aws.amazon.com/lambda/latest/dg/accessing-resources.html
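As a sketch, once the role is attached the clients need no keys at all; boto3 picks up the role's temporary credentials automatically (the bucket name comes from the same environment variable used above):
import os
import boto3

s3_client = boto3.client('s3')  # uses the Lambda execution role
for s3_key in s3_client.list_objects(Bucket=os.getenv('S3BUCKETNAME'))['Contents']:
    print(s3_key['Key'])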
Did you import boto3?
Try to execute only this:
UPDATE
import boto3

s3 = boto3.resource('s3')
for bucket in s3.buckets.all():
    print(bucket.name)
