How to set a profile on an AWS client - python-3.x

I'm trying to create an AWS client for IoT following this article: How can I publish to a MQTT topic in a Amazon AWS Lambda function?
client = boto3.client('iot-data', region_name='us-east-1')
However, I need to set a profile so that boto3 picks up the correct credentials from my ~/.aws/credentials file.
The articles that describe how to do this (How to choose an AWS profile when using boto3 to connect to CloudFront) use a Session instead of creating a client directly. However, iot-data is not a "resource" that you can get from a Session.
boto_session = boto3.Session(profile_name='my-profile')
boto_client = boto_session.resource('iot-data', region_name='us-west-1')
When I try the above I get the error:
Consider using a boto3.client('iot-data') instead of a resource for 'iot-data'
And we've achieved full catch-22 status. How can I get an appropriate IoT client using an AWS profile?

IoTDataPlane does not have a resource interface; you can only use a client with IoTDataPlane:
boto_session.client('iot-data', region_name='us-west-1')
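Putting both pieces together, a minimal sketch (profile name, region, topic, and payload are placeholders):
import boto3
# The named profile from ~/.aws/credentials supplies the credentials;
# iot-data is only available as a client, not a resource.
boto_session = boto3.Session(profile_name='my-profile')
iot_client = boto_session.client('iot-data', region_name='us-east-1')
# Publish to an MQTT topic, as in the linked article.
iot_client.publish(
    topic='my/topic',
    qos=1,
    payload='{"message": "hello"}',
)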

Related

NodeJS AWS SDK, can't unset credentials

I have a NodeJS application that runs on an EC2 instance and serves an API to my customers. The EC2 instance has an instance role that grants the minimum permissions the application needs (SQS, S3 read and write, and SES). One particular endpoint in my API creates a signed URL so that clients can access S3 files, and to create the signed URL I use an IAM user that has only S3 read access to that bucket.
My issue is that whenever that endpoint is called, the AWS credentials are set using:
const awsConfig = {
  region,
  accessKeyId: ${keyofreadonlyuser},
  secretAccessKey: ${secretofreadonlyuser},
};
AWS.config.update(awsConfig);
This way, all subsequent calls to the AWS SDK use those credentials, resulting in an Access Denied error.
I've tried setting accessKeyId: null, secretAccessKey: null and then calling AWS.config.update, but the credentials are not cleared.
What is the best way to handle situations like this?
I would recommend that, instead of updating the default config, you use two separate client objects:
the default, implicitly created one, which is associated with the assumed IAM role
an explicitly created one, which is associated with the IAM user credentials
Specifically for the second case, pass the IAM user credentials directly to that client's constructor (the boto3 equivalent would be passing them to the Session constructor) rather than to the global config.
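A rough sketch of that two-object pattern in boto3 terms (names and key values below are placeholders); in the JavaScript SDK the equivalent is to pass the IAM user credentials to a dedicated S3 client's constructor instead of to AWS.config.update:
import boto3
# Default client: picks up the EC2 instance role credentials automatically.
default_s3 = boto3.client('s3')
# Explicit session for the read-only IAM user, used only by the signed-URL endpoint.
readonly_session = boto3.Session(
    aws_access_key_id='READ_ONLY_ACCESS_KEY_ID',          # placeholder
    aws_secret_access_key='READ_ONLY_SECRET_ACCESS_KEY',  # placeholder
)
readonly_s3 = readonly_session.client('s3')
# Only this client signs the URL; the global/default configuration is never mutated.
url = readonly_s3.generate_presigned_url(
    'get_object',
    Params={'Bucket': 'my-bucket', 'Key': 'path/to/object.png'},  # placeholders
    ExpiresIn=3600,
)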

Upload a file from form in S3 bucket using boto3 and handler is created in lambda

I want to upload small image and audio files from a form to S3, using Postman for testing. I successfully uploaded files to an AWS S3 bucket from my application running on my local machine. The following is the part of the code I used for file uploading.
import os
import uuid
import boto3
s3_client = boto3.client('s3', aws_access_key_id=AWS_ACCESS_KEY_ID, aws_secret_access_key=AWS_SECRET_ACCESS_KEY)
async def save_file_static_folder(file, endpoint, user_id):
    _, ext = os.path.splitext(file.filename)
    raw_file_name = f'{uuid.uuid4().hex}{ext}'
    # Save image file in folder
    if ext.lower() in image_file_extension:
        relative_file_folder = user_id + '/' + endpoint
        contents = await file.read()
        try:
            response = s3_client.put_object(Bucket=S3_BUCKET_NAME, Key=relative_file_folder + '/' + raw_file_name, Body=contents)
        except Exception:
            return FileEnum.ERROR_ON_INSERT
I called this function from another endpoint, and the form data (e.g. name, date of birth, and other details) is successfully saved in a MongoDB database and the files are uploaded to the S3 bucket.
This app uses FastAPI, and files are uploaded to the S3 bucket when the app runs on my local machine.
The same app is deployed to AWS Lambda with an S3 bucket as storage. To handle the whole app, the following is added to the endpoint file.
handler = Mangum(app)
After deploying the app to AWS and creating the Lambda function from the AWS root user account, files do not get uploaded to the S3 bucket.
If I don't provide files in the form, the AWS API endpoint works: form data gets stored in the MongoDB database (MongoDB Atlas) and the app works fine hosted on Lambda.
The app deployed via the Lambda function works, except for the file uploads in the form. On my local machine, file uploads to S3 succeed.
EDIT
While tracing in CloudWatch I got the following error:
exception An error occurred (InvalidAccessKeyId) when calling the PutObject operation: The AWS Access Key Id you provided does not exist in our records.
I checked the AWS Access Key ID and secret access key many times and they are correct; the root user credentials are the ones in place.
It looks like you have configured your Lambda function with an execution IAM role, but you are overriding the AWS credentials supplied to the boto3 SDK here:
s3_client = boto3.client('s3', aws_access_key_id=AWS_ACCESS_KEY_ID, aws_secret_access_key=AWS_SECRET_ACCESS_KEY)
You don't need to provide credentials explicitly because the boto3 SDK (and all language SDKs) will automatically retrieve credentials dynamically for you. So, ensure that your Lambda function is configured with the correct IAM role, and then change your code as follows:
s3_client = boto3.client('s3')
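With the execution role supplying credentials, the upload call from the question stays the same; only the client creation changes. A minimal sketch (the bucket name and key below are placeholders standing in for the question's constants):
import boto3
# Inside Lambda, boto3 automatically picks up the execution role's temporary
# credentials; no access keys are passed in.
s3_client = boto3.client('s3')
S3_BUCKET_NAME = 'my-bucket'  # placeholder for the question's constant
response = s3_client.put_object(
    Bucket=S3_BUCKET_NAME,
    Key='user-id/endpoint/file-name.png',  # placeholder key
    Body=b'...file contents...',
)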
As an aside, you indicated that you may be using AWS root credentials. It's generally a best security practice in AWS to not use root credentials. Instead, create IAM roles and IAM users.
We strongly recommend that you do not use the root user for your everyday tasks, even the administrative ones. Instead, adhere to the best practice of using the root user only to create your first IAM user. Then securely lock away the root user credentials and use them to perform only a few account and service management tasks.

How to authenticate with tokens in Nodejs to a private bucket in Cloud Storage

Usually, in Python, I get the application default credentials, get the access token, then refresh it to be able to authenticate to a private environment.
Code in Python:
import google.auth
import google.auth.transport.requests
import google.oauth2.credentials
from google.cloud import storage
# getting the credentials and project details for gcp project
credentials, your_project_id = google.auth.default(scopes=["https://www.googleapis.com/auth/cloud-platform"])
# getting request object
auth_req = google.auth.transport.requests.Request()
print(f"Checking Authentication : {credentials.valid}")
print('Refreshing token ....')
credentials.refresh(auth_req)
# check for valid credentials
print(f"Checking Authentication : {credentials.valid}")
access_token = credentials.token
credentials = google.oauth2.credentials.Credentials(access_token)
storage_client = storage.Client(project='itg-ri-consumerloop-gbl-ww-dv', credentials=credentials)
I am entirely new to NodeJS, and I am trying to do the same thing.
My goal later is to create an App Engine application that exposes an image stored in a private bucket, so credentials are a must.
How is it done?
For authentication, you can rely on the application default credentials that are present within the GCP platform (GAE, Cloud Functions, VMs, etc.). Then you can just run the following piece of code from the documentation:
const {Storage} = require('@google-cloud/storage');
const storage = new Storage();
const bucket = storage.bucket('albums');
const file = bucket.file('my-existing-file.png');
In most circumstances, there is no need to use authentication packages explicitly, since authentication is already handled underneath by the @google-cloud/storage package in Node.js. The same holds for the google-cloud-storage package in Python. It can help to look at the source code of both packages on GitHub; for me, this really helped in understanding the authentication mechanism.
When I develop code on my own laptop that interacts with Google Cloud Storage, I first tell the gcloud SDK what my credentials are and which GCP project I am working on. I use the following commands for this:
gcloud config set project [PROJECT_ID]
gcloud auth application-default login
You can also set GOOGLE_APPLICATION_CREDENTIALS as an environment variable that points to a credentials file. Then, within your code, you can pass the project name when initializing the client. This can be helpful if you are running your code outside of GCP, on another server for example.
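The same simplification applies to the Python snippet in the question: once ADC is configured, the manual token refresh is unnecessary. A minimal sketch, reusing the project, bucket, and file names from above:
from google.cloud import storage
# With application default credentials available (gcloud auth application-default login,
# GOOGLE_APPLICATION_CREDENTIALS, or the runtime's service account on GAE), the client
# finds credentials on its own.
storage_client = storage.Client(project='itg-ri-consumerloop-gbl-ww-dv')
bucket = storage_client.bucket('albums')
blob = bucket.blob('my-existing-file.png')
contents = blob.download_as_bytes()  # read the private object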

What and how to pass credentials using the Python Client Library for the GCP compute API

I want to get a list of all instances in a project using the Python Google client API (google-api-python-client==1.7.11).
I am trying to connect using the method googleapiclient.discovery.build; this method requires credentials as an argument.
I read the documentation but did not understand the credential format and which credentials it requires.
Can anyone explain what credentials are needed and how to pass them to make the GCP connection?
The credentials that you need are called "Service Account JSON Key File". These are created in the Google Cloud Console under IAM & Admin / Service Accounts. Create a service account and download the key file. In the example below this is service-account.json.
Example code that uses a service account:
from googleapiclient import discovery
from google.oauth2 import service_account
scopes = ['https://www.googleapis.com/auth/cloud-platform']
sa_file = 'service-account.json'
zone = 'us-central1-a'
project_id = 'my_project_id'  # Project ID, not Project Name
credentials = service_account.Credentials.from_service_account_file(sa_file, scopes=scopes)
# Create the Cloud Compute Engine service object
service = discovery.build('compute', 'v1', credentials=credentials)
request = service.instances().list(project=project_id, zone=zone)
while request is not None:
    response = request.execute()
    for instance in response['items']:
        # TODO: Change code below to process each `instance` resource:
        print(instance)
    request = service.instances().list_next(previous_request=request, previous_response=response)
Application default credentials are picked up by the Google API client libraries automatically. You can find a Python example in that documentation; also check Setting Up Authentication for Server to Server Production Applications.
According to the most recent GCP documentation:
we recommend you use Google Cloud Client Libraries for your application. Google Cloud Client Libraries use a library called Application Default Credentials (ADC) to automatically find your service account credentials
In case you still want to set it manually, you can first create a service account and grant it all the necessary permissions:
# A name for the service account you are about to create:
export SERVICE_ACCOUNT_NAME=your-service-account-name
# Create service account:
gcloud iam service-accounts create ${SERVICE_ACCOUNT_NAME} --display-name="Service Account for ai-platform-samples repo"
# Grant the required roles:
gcloud projects add-iam-policy-binding ${PROJECT_ID} --member serviceAccount:${SERVICE_ACCOUNT_NAME}@${PROJECT_ID}.iam.gserviceaccount.com --role roles/ml.developer
gcloud projects add-iam-policy-binding ${PROJECT_ID} --member serviceAccount:${SERVICE_ACCOUNT_NAME}@${PROJECT_ID}.iam.gserviceaccount.com --role roles/storage.objectAdmin
# Download the service account key and store it in the file specified by GOOGLE_APPLICATION_CREDENTIALS:
gcloud iam service-accounts keys create ${GOOGLE_APPLICATION_CREDENTIALS} --iam-account ${SERVICE_ACCOUNT_NAME}@${PROJECT_ID}.iam.gserviceaccount.com
Once that's done, check whether the ADC path has been set properly:
echo $GOOGLE_APPLICATION_CREDENTIALS
With the ADC path set, you don't need to load the service account key from within the code (which is undesirable anyway), so the code looks as follows:
service = googleapiclient.discovery.build(<API>, <version>, cache_discovery=False)
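For example, with GOOGLE_APPLICATION_CREDENTIALS exported, the instance-listing snippet above can drop the explicit key file entirely; a sketch using the same zone and a placeholder project ID:
from googleapiclient import discovery
# ADC is found automatically; no service_account.Credentials call is needed.
service = discovery.build('compute', 'v1', cache_discovery=False)
request = service.instances().list(project='my_project_id', zone='us-central1-a')
while request is not None:
    response = request.execute()
    for instance in response.get('items', []):
        print(instance['name'])
    request = service.instances().list_next(previous_request=request, previous_response=response)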

AWS Lambda gateway API gives error message

I have created an API endpoint for a Lambda function, https://XXXXXXXXX.execute-api.us-east-1.amazonaws.com/XXXX/XXXXXXXXXXXX/, which is a GET method.
While calling that endpoint from Postman, it is giving me:
{
"message": "'XXXXXXXXX3LPDGPBF33Q:XXXXXXXXXXBLh219REWwTsNMyyyfbucW8MuM7' not a valid key=value pair (missing equal-sign) in Authorization header: 'AWS XXXXXXXXX3LPDGPBF33Q:XXXXXXXXXXBLh219REWwTsNMyyyfbucW8MuM7'."
}
This is a screenshot of the Amazon Lambda Upload Site: http://i.stack.imgur.com/mwJ3w.png
I have the Access Key ID & Secret Access Key for an IAM user. I used them, but no luck. Can anyone suggest a tweak for this?
If you're using the latest version of Postman, you can generate the SigV4 signature automatically. The region should correspond to your API region (e.g. "us-east-1") and the service name should be "execute-api".
This is not a solution but it has helped me more than once:
Double-check that you are actually hitting an existing endpoint! Especially if you're working with AWS. AWS will return this error if you don't have the correct handler set up in your Lambda or if your API Gateway is not configured to serve this resource/verb/etc.
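If you want to reproduce the SigV4 signature outside Postman, the request can also be signed from Python with botocore; a sketch, with the invoke URL left as the placeholder from the question:
import boto3
import requests
from botocore.auth import SigV4Auth
from botocore.awsrequest import AWSRequest
url = 'https://XXXXXXXXX.execute-api.us-east-1.amazonaws.com/XXXX/XXXXXXXXXXXX/'
# The session picks up the IAM user's keys from ~/.aws/credentials.
session = boto3.Session()
request = AWSRequest(method='GET', url=url)
SigV4Auth(session.get_credentials(), 'execute-api', 'us-east-1').add_auth(request)
response = requests.get(url, headers=dict(request.headers))
print(response.status_code, response.text)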
