Boto/Boto3: bucket.get_key(): 403 Forbidden - python-3.x

I am trying to connect to AWS S3 without using credentials. I attached a role with S3 full access to my instance. I want to check whether a file exists in the bucket; if it doesn't, upload it. If it does exist, I want to compare its md5sum with the local file and upload a new version if they differ.
I try to get key of file in S3 via boto by using bucket.get_key('mykey') and get this error:
File "/usr/local/lib/python3.5/dist-packages/boto/s3/bucket.py", line 193, in get_key key, resp = self._get_key_internal(key_name, headers, query_args_l)
File "/usr/local/lib/python3.5/dist-packages/boto/s3/bucket.py", line 232, in _get_key_internal response.status, response.reason, '') boto.exception.S3ResponseError: S3ResponseError: 403 Forbidden"
I searched and added "validate=False" when getting the bucket, but this didn't resolve my issue. I'm using Python 3.5, boto and boto3.
Here is my code:
import boto3
import boto
from boto import ec2
import os
import boto.s3.connection
from boto.s3.key import Key
bucket_name = "abc"
conn = boto.s3.connect_to_region('us-west-1', is_secure=True,
                                 calling_format=boto.s3.connection.OrdinaryCallingFormat())
bucket = conn.get_bucket(bucket_name, validate=False)
key = bucket.get_key('xxxx')
print(key)
I don't know why I get that error. Please help me resolve this problem. Thanks!
Updated
I've just found the root cause of this problem. It was caused by "The difference between the request time and the current time is too large".
Because of that, it could not get the key of the file from the S3 bucket. I updated the ntp service to synchronize the local time with UTC, and it now runs successfully.
Synchronization time by:
sudo service ntp stop
sudo ntpdate -s 0.ubuntu.pool.ntp.org
sudo service ntp start
Thanks!
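In case it helps anyone hitting the same 403: a quick way to check for clock skew before touching ntp is to compare the local clock against the Date header an S3 endpoint returns. This is only a rough sketch (the endpoint is illustrative):
import urllib.request, urllib.error
from email.utils import parsedate_to_datetime
from datetime import datetime, timezone

# HEAD request to the S3 endpoint; even an error response carries a Date header
req = urllib.request.Request('https://s3.us-west-1.amazonaws.com', method='HEAD')
try:
    server_date = urllib.request.urlopen(req).headers['Date']
except urllib.error.HTTPError as e:
    server_date = e.headers['Date']

skew = datetime.now(timezone.utc) - parsedate_to_datetime(server_date)
print("Clock skew vs. AWS:", skew)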

The IAM role is last in the credential search order. I bet you have credentials stored earlier in the search order that don't have full S3 access. Check Configuration Settings and Precedence and make sure no other credentials are present, so that the IAM role is used to fetch the credentials. Though the documentation is for the CLI, the same order applies to scripts too.
The AWS CLI looks for credentials and configuration settings in the following order:
Command line options – region, output format and profile can be specified as command options to override default settings.
Environment variables – AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN.
The AWS credentials file – located at ~/.aws/credentials on Linux, macOS, or Unix, or at C:\Users\USERNAME\.aws\credentials on Windows. This file can contain multiple named profiles in addition to a default profile.
The CLI configuration file – typically located at ~/.aws/config on Linux, macOS, or Unix, or at C:\Users\USERNAME\.aws\config on Windows. This file can contain a default profile, named profiles, and CLI specific configuration parameters for each.
Container credentials – provided by Amazon Elastic Container Service on container instances when you assign a role to your task.
Instance profile credentials – these credentials can be used on EC2 instances with an assigned instance role, and are delivered through the Amazon EC2 metadata service.
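To see which credentials boto3 actually resolved on the instance (and therefore whether the IAM role is being picked up at all), a small check like the following can help. This is a minimal sketch and assumes boto3 is installed and the instance profile is attached:
import boto3

session = boto3.Session()
creds = session.get_credentials()
if creds is None:
    print("No credentials found")
else:
    frozen = creds.get_frozen_credentials()
    # 'method' reveals the source, e.g. 'env', 'shared-credentials-file', 'iam-role'
    print("Resolved via:", creds.method)
    print("Access key in use:", frozen.access_key)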

Related

How to start an ec2 instance using sqs and trigger a python script inside the instance

I have a Python script which takes a video and converts it to a series of small panoramas. Now, there's an S3 bucket where a video (mp4) will be uploaded. I need this file to be sent to the EC2 instance whenever it is uploaded.
This is the flow:
Upload video file to S3.
This should trigger the EC2 instance to start.
Once it is running, I want the file to be copied to a particular directory inside the instance.
After this, I want the py file (panorama.py) to start running, read the video file from the directory, process it, and then generate output images.
These output images need to be uploaded to a new bucket or the same bucket which was initially used.
The instance should terminate after this.
What I have done so far is create a Lambda function that is triggered whenever an object is added to that bucket. It stores the name of the file and the path. I had read that I now need to use an SQS queue, pass this name and path metadata to the queue, and use SQS to trigger the instance. Then I need to run a script in the instance which pulls the metadata from the SQS queue and uses it to copy the file (mp4) from the bucket to the instance.
How do I do this?
I am new to AWS and hence do not know much about SQS, or how to transfer metadata and automatically trigger an instance, etc.
Your wording is a bit confusing. It says that you want to "start" an instance (which suggests that the instance already exists), but then it says that you want to "terminate" an instance (which would permanently remove it). I am going to assume that you actually intend to "stop" the instance so that it can be used again.
You can put a shell script in the /var/lib/cloud/scripts/per-boot/ directory. This script will then be executed every time the instance starts.
When the instance has finished processing, it can call sudo shutdown now -h to turn off the instance. (Alternatively, it can tell EC2 to stop the instance, but using shutdown is easier.)
For details, see: Auto-Stop EC2 instances when they finish a task - DEV Community
I tried to answer in the most minimalist way; there are many points below that can be further improved. Even so, it is still quite a lot to take in, since you mentioned you are new to AWS.
Using AWS Lambda with Amazon S3
Amazon S3 can send an event to a Lambda function when an object is created or deleted. You configure notification settings on a bucket, and grant Amazon S3 permission to invoke a function on the function's resource-based permissions policy.
When the object is uploaded, it will trigger the Lambda function, which creates the instance with EC2 user data (see Run commands on your Linux instance at launch).
For the EC2 instance, make sure you provide the necessary permissions (see Using instance profiles) for downloading and uploading the objects.
The user data has a script that does the rest of the work you need for your workflow:
1. Download the S3 object; you can pass the object name and S3 bucket name in the same script.
2. Once #1 has finished, start panorama.py, which processes the videos.
3. In the next step you can start uploading the output objects to the S3 bucket.
4. Eventually, terminating the instance will be a bit tricky, which you can achieve via Change the instance initiated shutdown behavior
OR
you can use the method below for terminating the instance, but in that case your EC2 instance profile must have permission to terminate the instance.
ec2-terminate-instances $(curl -s http://169.254.169.254/latest/meta-data/instance-id)
You can wrap the above steps into a shell script inside the userdata.
Lambda ec2 start instance:
def launch_instance(EC2, config, user_data):
    # EC2 is a boto3 EC2 client; config holds the launch parameters.
    # tag_specs is assumed to be defined elsewhere (e.g. built from config).
    ec2_response = EC2.run_instances(
        ImageId=config['ami'],  # ami-0123b531fc646552f
        InstanceType=config['instance_type'],
        KeyName=config['ssh_key_name'],
        MinCount=1,
        MaxCount=1,
        SecurityGroupIds=config['security_group_ids'],
        TagSpecifications=tag_specs,
        # UserData=base64.b64encode(user_data).decode("ascii")
        UserData=user_data  # boto3 base64-encodes the user data string for you
    )
    new_instance_resp = ec2_response['Instances'][0]
    instance_id = new_instance_resp['InstanceId']
    print(f"[DEBUG] Full ec2 instance response data for '{instance_id}': {new_instance_resp}")
    return (instance_id, new_instance_resp)
Upload file to S3 -> Launch EC2 instance
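For completeness, here is a rough sketch of the Lambda handler that could sit in front of launch_instance above, parsing the S3 event and building the user data. The config values, tag_specs, and the script paths are illustrative assumptions, not part of the original answer:
import boto3

# tag_specs is referenced by launch_instance above; a minimal placeholder
tag_specs = [{'ResourceType': 'instance',
              'Tags': [{'Key': 'Name', 'Value': 'panorama-worker'}]}]

config = {                                   # placeholder launch parameters
    'ami': 'ami-0123b531fc646552f',
    'instance_type': 't3.medium',
    'ssh_key_name': 'my-key',
    'security_group_ids': ['sg-xxxxxxxx'],
}

def lambda_handler(event, context):
    # Bucket and key of the uploaded video, taken from the S3 event notification
    record = event['Records'][0]
    bucket = record['s3']['bucket']['name']
    key = record['s3']['object']['key']

    # First-boot script for the new instance; paths are examples
    user_data = (
        "#!/bin/bash\n"
        f"aws s3 cp 's3://{bucket}/{key}' /home/ubuntu/input.mp4\n"
        "python3 /home/ubuntu/panorama.py /home/ubuntu/input.mp4\n"
        f"aws s3 cp /home/ubuntu/output/ 's3://{bucket}/output/' --recursive\n"
    )

    instance_id, _ = launch_instance(boto3.client('ec2'), config, user_data)
    return {'started_instance': instance_id}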

Can't sync s3 with ec2 folder from aws lambda

I am trying to automate data processing using AWS. I have setup an AWS lambda function in python that:
Gets triggered by an S3 PUT event
Ssh into an EC2 instance using paramiko layer
Copy the new objects from the bucket into some folder in the instance, unzip the file inside the instance and run a python script that cleans the csv files.
The problem is that the AWS CLI call to sync the S3 bucket with the EC2 folder is not working, but when I manually ssh into the EC2 instance and run the command, it works. My aws-cli is configured with my access keys and the EC2 instance has an S3 role that allows it full access.
import boto3
import time
import paramiko

def lambda_handler(event, context):
    # create a low level client representing s3
    s3 = boto3.client('s3')
    ec2 = boto3.resource('ec2', region_name='eu-west-a')
    instance_id = 'i-058456c79fjcde676'
    instance = ec2.Instance(instance_id)
    # ------------------------------------------------------
    # start instance
    instance.start()
    # allow some time for the instance to start
    time.sleep(30)

    # Print a few details of the instance
    print("Instance id - ", instance.id)
    print("Instance public IP - ", instance.public_ip_address)
    print("Instance private IP - ", instance.private_ip_address)
    print("Public dns name - ", instance.public_dns_name)
    print("----------------------------------------------------")

    print('Downloading pem file')
    s3.download_file('some_bucket', 'some_pem_file.pem', '/tmp/some_pem_file.pem')

    # Allowing a few seconds for the download to complete
    print('waiting for instance to start')
    time.sleep(30)

    print('sshing to instance')
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    privkey = paramiko.RSAKey.from_private_key_file('/tmp/some_pem_file.pem')
    # username is most likely 'ec2-user' or 'root' or 'ubuntu'
    # depending upon your ec2 AMI
    # s3_path = "s3://some_bucket/" + object_name
    ssh.connect(instance.public_dns_name, username='ubuntu', pkey=privkey)

    print('inside machine...running commands')
    stdin, stdout, stderr = ssh.exec_command(
        'aws s3 sync s3://some_bucket/ ~/ec2_folder;'
        ' bash ~/ec2_folder/unzip.sh; python3 ~/ec2_folder/process.py;')
    stdin.flush()
    data = stdout.read().splitlines()
    for line in data:
        print(line)

    print('done, closing ssh session')
    ssh.close()

    # Stop the instance
    instance.stop()

    return 'Triggered'
The use of an SSH tool is somewhat unusual.
Here are a few more 'cloud-friendly' options you might consider.
Systems Manager Run Command
The AWS Systems Manager Run Command allows you to execute a script on an Amazon EC2 instance (and, in fact, on any computer that is running the Systems Manager agent). It can even run the command on many (hundreds!) of instances/computers at the same time, keeping track of the success of each execution.
This means that, instead of connecting to the instance via SSH, the Lambda function could call the Run Command via an API call and Systems Manager would run the code on the instance.
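As a rough illustration of that approach (a sketch rather than the original answer's code; the instance ID and commands are placeholders, and the instance needs the SSM agent plus an instance profile that allows Systems Manager):
import boto3

ssm = boto3.client('ssm')

# Ask Systems Manager to run the sync/processing commands on the instance
response = ssm.send_command(
    InstanceIds=['i-0123456789abcdef0'],      # placeholder instance ID
    DocumentName='AWS-RunShellScript',        # built-in document for shell commands
    Parameters={'commands': [
        'aws s3 sync s3://some_bucket/ /home/ubuntu/ec2_folder',
        'bash /home/ubuntu/ec2_folder/unzip.sh',
        'python3 /home/ubuntu/ec2_folder/process.py',
    ]},
)
command_id = response['Command']['CommandId']

# Later, check how the invocation went
result = ssm.get_command_invocation(CommandId=command_id,
                                    InstanceId='i-0123456789abcdef0')
print(result['Status'])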
Pull, Don't Push
Rather than 'pushing' the work to the instance, the instance could 'pull the work':
Configure the Amazon S3 event to push a message into an Amazon SQS queue
Code on the instance could be regularly polling the SQS queue
When it finds a message on the queue, it runs a script that downloads the file (the bucket and key are passed in the message) and then runs the processing script
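A minimal polling loop on the instance might look like the following sketch. It assumes the S3 event notification is delivered straight to the queue (so the message body is the S3 event JSON); the queue URL and paths are placeholders:
import json
import boto3

sqs = boto3.client('sqs')
s3 = boto3.client('s3')
queue_url = 'https://sqs.eu-west-1.amazonaws.com/123456789012/video-jobs'  # placeholder

while True:
    resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1,
                               WaitTimeSeconds=20)   # long polling
    for msg in resp.get('Messages', []):
        event = json.loads(msg['Body'])              # the S3 event notification
        for record in event.get('Records', []):
            bucket = record['s3']['bucket']['name']
            key = record['s3']['object']['key']
            s3.download_file(bucket, key, '/tmp/input.mp4')
            # ... run the processing script on /tmp/input.mp4 here ...
        sqs.delete_message(QueueUrl=queue_url,
                           ReceiptHandle=msg['ReceiptHandle'])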
Trigger via HTTP
The instance could run a web server, listening for a message.
Configure the Amazon S3 event to push a message into an Amazon SNS topic
Add the instance's URL as an HTTP subscription to the SNS topic
When a message is sent to SNS, it forwards it to the instance's URL
Code in the web server then triggers your script
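A bare-bones version of such an endpoint could look like this sketch; it confirms the SNS subscription and hands the embedded S3 event to your own code (the port and the processing hook are assumptions, and signature verification is omitted):
import json
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class SnsHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers['Content-Length']))
        message = json.loads(body)
        if message.get('Type') == 'SubscriptionConfirmation':
            # Visit the SubscribeURL once so SNS starts delivering notifications
            urllib.request.urlopen(message['SubscribeURL'])
        elif message.get('Type') == 'Notification':
            s3_event = json.loads(message['Message'])   # the original S3 event
            # ... trigger your processing script with the bucket/key from s3_event ...
        self.send_response(200)
        self.end_headers()

HTTPServer(('', 8080), SnsHandler).serve_forever()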
This answer is based on the additional information that you wish to shutdown the EC2 instance between executions.
I would recommend:
Amazon S3 Event triggers Lambda function
Lambda function starts the instance, passing filename information via the User Data field (it can be used to pass data, not just scripts). The Lambda function can then immediately exit (which is more cost-effective than waiting for the job to complete)
Put your processing script in the /var/lib/cloud/scripts/per-boot/ directory, which will cause it to run every time the instance is started (every time, not just the first time)
The script can extract the User Data passed from the Lambda function by running curl http://169.254.169.254/latest/user-data/, so that it knows the filename from S3
The script then processes the file
The script then runs sudo shutdown now -h to stop the instance
If there is a chance that another file might come while the instance is already processing a file, then I would slightly change the process:
Rather than passing the filename via User Data, put it into an Amazon SQS queue
When the instance is started, it should retrieve the details from the SQS queue
After the file is processed, it should check the queue again to see if another message has been sent
If yes, process the file and repeat
If no, shut itself down
By the way, things can sometimes go wrong, so it's worth putting a 'circuit breaker' in the script so that it does not shut down the instance if you want to debug things. This could be a matter of passing a flag, or even adding a tag to the instance, which is checked before calling the shutdown command.
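Putting those pieces together, a per-boot script along these lines could work. This is a rough sketch only: the User Data format ("bucket/key"), the paths, and the DEBUG-file circuit breaker are illustrative assumptions; per-boot scripts run as root, so no sudo is needed for the shutdown.
import os
import subprocess
import urllib.request

import boto3

# Read the User Data passed by the Lambda function (assumed here to be "bucket/key";
# instances that enforce IMDSv2 would need a session token first).
user_data = urllib.request.urlopen(
    'http://169.254.169.254/latest/user-data').read().decode().strip()
bucket, key = user_data.split('/', 1)

# Download the file and process it
os.makedirs('/home/ubuntu/input', exist_ok=True)
local_path = '/home/ubuntu/input/' + os.path.basename(key)
boto3.client('s3').download_file(bucket, key, local_path)
subprocess.run(['python3', '/home/ubuntu/panorama.py', local_path], check=True)

# Circuit breaker: leave the instance running if a debug marker file exists
if not os.path.exists('/home/ubuntu/DEBUG'):
    subprocess.run(['shutdown', '-h', 'now'])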

Read CSV file from AWS S3

I have an EC2 instance running pyspark and I'm able to connect to it (ssh) and run interactive code within a Jupyter Notebook.
I have an S3 bucket with a csv file that I want to read. When I attempt to read it with:
spark = SparkSession.builder.appName('Basics').getOrCreate()
df = spark.read.csv('https://s3.us-east-2.amazonaws.com/bucketname/filename.csv')
This throws a long Python error message and then something related to:
Py4JJavaError: An error occurred while calling o131.csv.
Specify the S3 path along with the access key and secret key as follows:
's3n://<AWS_ACCESS_KEY_ID>:<AWS_SECRET_ACCESS_KEY>@my.bucket/folder/input_data.csv'
Access key-related information can be introduced in the typical username + password manner for URLs. As a rule, the access protocol should be s3a, the successor to s3n (see Technically what is the difference between s3n, s3a and s3?). Putting this together, you get
spark.read.csv("s3a://<AWS_ACCESS_KEY_ID>:<AWS_SECRET_ACCESS_KEY>#bucketname/filename.csv")
As an aside, some Spark execution environments, e.g., Databricks, allow S3 buckets to be mounted as part of the file system. You can do the same when you build a cluster using something like s3fs.
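If you would rather not embed the keys in the URL (they can end up in logs), an alternative sketch is to set them on the Hadoop configuration before reading; the key placeholders are assumptions and the cluster needs the hadoop-aws (s3a) libraries on its classpath:
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName('Basics').getOrCreate()

# Provide the credentials to the s3a filesystem via the Hadoop configuration
hadoop_conf = spark.sparkContext._jsc.hadoopConfiguration()
hadoop_conf.set("fs.s3a.access.key", "<AWS_ACCESS_KEY_ID>")
hadoop_conf.set("fs.s3a.secret.key", "<AWS_SECRET_ACCESS_KEY>")

df = spark.read.csv("s3a://bucketname/filename.csv")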

boto3 s3 connection error: An error occurred (SignatureDoesNotMatch) when calling the ListBuckets operation

I'm using the boto3 package to connect from outside an s3 cluster (i.e. the script is currently not being run within the AWS 'cloud', but from my MBP connecting to the relevant cluster). My code:
s3 = boto3.resource(
    "s3",
    aws_access_key_id=self.settings['CREDENTIALS']['aws_access_key_id'],
    aws_secret_access_key=self.settings['CREDENTIALS']['aws_secret_access_key'],
)
bucket = s3.Bucket(self.settings['S3']['bucket_test'])

for bucket_in_all in boto3.resource('s3').buckets.all():
    if bucket_in_all.name == self.settings['S3']['bucket_test']:
        print("Bucket {} verified".format(self.settings['S3']['bucket_test']))
Now I'm receiving this error message:
botocore.exceptions.ClientError: An error occurred (SignatureDoesNotMatch) when calling the ListBuckets operation
I'm aware of the sequence in which the AWS credentials are checked, and I've tried different permutations of my environment variables and ~/.aws/credentials, and I know that the credentials in my .py script should override them; however, I'm still seeing this SignatureDoesNotMatch error message. Any ideas where I may be going wrong? I've also tried:
# Create a session
session = boto3.session.Session(
    aws_access_key_id=self.settings['CREDENTIALS']['aws_access_key_id'],
    aws_secret_access_key=self.settings['CREDENTIALS']['aws_secret_access_key'],
    aws_session_token=self.settings['CREDENTIALS']['session_token'],
    region_name=self.settings['CREDENTIALS']['region_name']
)

s3 = boto3.resource('s3')
for bucket in s3.buckets.all():
    print(bucket.name)
...however I also see the same error traceback.
Actually, this was partly answered by @John Rotenstein and @bdcloud; nevertheless, I need to be more specific...
The following code in my case was not necessary and causing the error message:
# Create a session
session = boto3.session.Session(
    aws_access_key_id=self.settings['CREDENTIALS']['aws_access_key_id'],
    aws_secret_access_key=self.settings['CREDENTIALS']['aws_secret_access_key'],
    aws_session_token=self.settings['CREDENTIALS']['session_token'],
    region_name=self.settings['CREDENTIALS']['region_name']
)
The credentials now stored in self.settings mirror those in ~/.aws/credentials. Weirdly (and like last week, where the reverse happened), I now have access. It could be that a simple reboot of my laptop meant that my new credentials (since I updated these again yesterday) in ~/.aws/credentials were then 'accepted'.
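One related detail, going slightly beyond the answers above: boto3.resource('s3') on its own starts a fresh default session and ignores the session built explicitly, so those credentials were never actually used. To use them, the resource would have to be created from the session itself, for example:
s3 = session.resource('s3')   # resource built from the explicit session
for bucket in s3.buckets.all():
    print(bucket.name)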

Invalid endpoint on aws s3 ls

I've just installed AWS Command Line Interface on Windows 10 (64-bit). I ran 'aws configure' providing the two keys, a region value of us-east-1, and took the default json format. Then when I run 'aws s3 ls' I get the following error:
Invalid endpoint: https://s3..amazonaws.com
It's either not taking my region, or it's putting two dots where there should be one in the link. My ~/.aws/config file only has these lines in it:
[default]
region = us-east-1
Any ideas why I get 2 dots in place of my region in the s3 link, causing the invalid endpoint error? Thanks for any assistance.
You could also fix this by setting environment variables in your current session for AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_DEFAULT_REGION.
(in this case your error is actually caused by the CLI not finding a value for the region)
I think that's not because of the region, because you have already set the default to us-east-1, as shown by your ~/.aws/config file, but your output type is not set; also double-check your access key ID and secret access key.
It should look like:
[default]
region = us-east-1
output = json
Also check whether you are able to call other AWS API services (for example, try to create a DynamoDB table using the AWS CLI), and check whether your IAM user has access permissions to S3 or not.
This can happen if you set AWS_DEFAULT_REGION in either your .bash_profile or .bashrc.
Remove AWS_DEFAULT_REGION from those files.
~/.aws/config should be like this
[default]
output = json
region = eu-west-1
Restart your terminal.
You can also just specify the region on the command line itself, like below, if you don't want to rely on the default settings in the ~/.aws/config file...
aws --region us-east-1 s3 cp s3://<bucket> <local destination>
