How to access existing Amazon DynamoDB Table? - python-3.x

General Problem
I have followed this tutorial on creating and accessing an Amazon DynamoDB table from an Android application, and then adapted it for use in an app I am writing. It works great despite the difficulties I faced getting it up and running. However, I would also like to be able to access the database using a Python script running on my Raspberry Pi.
I have found this tutorial, but it only seems to describe how to interact with a local DynamoDB table.
Specific Problem
The following code connects to a DynamoDB table and writes an item to it. I can't find any endpoint URL for my Amazon DynamoDB table, only its ARN, and there is no way to pass the username and password that I use in my app.
import json
import decimal

import boto3

# Helper class to convert a DynamoDB item to JSON.
class DecimalEncoder(json.JSONEncoder):
    def default(self, o):
        if isinstance(o, decimal.Decimal):
            if o % 1 > 0:
                return float(o)
            else:
                return int(o)
        return super(DecimalEncoder, self).default(o)

dynamodb = boto3.resource('dynamodb', region_name='us-west-2', endpoint_url="http://localhost:8000")
table = dynamodb.Table('Movies')

title = "The Big New Movie"
year = 2015

response = table.put_item(
    Item={
        'year': year,
        'title': title,
        'info': {
            'plot': "Nothing happens at all.",
            'rating': decimal.Decimal(0)
        }
    }
)
I have searched for instructions on connecting to a hosted Amazon DynamoDB instance, but everything I have found describes a local table. If anyone can give advice on this specific issue or recommend a tutorial to that effect, I would appreciate it immensely.

Change
dynamodb = boto3.resource('dynamodb', endpoint_url="http://localhost:8000")
To
dynamodb = boto3.resource('dynamodb', region_name='REGION')
Where REGION is the name of your DynamoDB region, such as 'us-west-2'. It will then connect to the hosted AWS DynamoDB service instead of the local one.
EDIT: If you haven't done so already, you will need to set up your AWS credentials. There are several options for this; one option is to use environment variables.
Boto3 will check these environment variables for credentials:
AWS_ACCESS_KEY_ID: the access key for your AWS account.
AWS_SECRET_ACCESS_KEY: the secret key for your AWS account.
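Putting the pieces together, here is a minimal sketch of writing to the hosted table, assuming the 'Movies' table and 'us-west-2' region from the question and credentials supplied via the environment variables above:
import decimal

import boto3

# No endpoint_url: boto3 talks to the AWS-hosted DynamoDB service and picks up
# credentials from AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY (or ~/.aws/credentials).
dynamodb = boto3.resource('dynamodb', region_name='us-west-2')
table = dynamodb.Table('Movies')

table.put_item(
    Item={
        'year': 2015,
        'title': 'The Big New Movie',
        'info': {
            'plot': 'Nothing happens at all.',
            'rating': decimal.Decimal(0)
        }
    }
)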

Related

How to add new key-value pair in secrets manager without impacting existing key and values in aws CDK

I have created an AWS stack in Python which creates a new secret in Secrets Manager. When I execute the code, the stack is created successfully and the secret with all the provided key-values is listed successfully. Below is the code.
templated_secret = asm.Secret(self, "abzzzz11",
    description="ddddd",
    secret_name="hahahah",
    generate_secret_string=asm.SecretStringGenerator(
        secret_string_template=json.dumps({"username1": "", "password1": "", "password2": "hello-world-prod2"}),
        generate_string_key="qwe"
    )
)
I have the two questions below:
Question 1: After the secret is created, the values for the keys are changed for the dev or stage environment. Now a new key and value need to be added to the same secret, but after adding the new values to my code and executing the stack, it replaces all the values. Is it possible for the system to add only those values that do not already exist in AWS Secrets Manager?
Question 2: I am unable to understand the purpose of generate_string_key in the code above. I read the AWS documentation but still don't understand what this field is for, so please help me understand its usage.
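Regarding Question 2, my understanding (a hedged sketch, not an authoritative answer) is that generate_string_key names the JSON key whose value Secrets Manager generates randomly and merges into the keys from secret_string_template, so the construct above would produce a secret value roughly like this:
# Hypothetical resulting secret value for the construct above, assuming the
# generated password is placed under the key named by generate_string_key:
secret_value = {
    "username1": "",
    "password1": "",
    "password2": "hello-world-prod2",
    "qwe": "<randomly-generated-string>"
}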

Getting 'Could not find stored procedure' error calling SQL Server stored procedure in Node JS from AWS RDS database

We normally work in Azure, where I write around 200 stored procedures a year in their SQL Server database.
We had to create a SQL Server database in AWS RDS and still call it in our Node APIs as usual. I was able to quickly and easily set up the AWS DB in SQL Server Management Studio, so I do know the credentials.
I created several tables and several stored procedures with no problems and tested to make sure they worked there. When I called them like I normally do in Node, I found I was getting an error
Could not find stored procedure
I went through forums all over, but most of the information pertains to MySQL instead of SQL Server, and after trying everything I saw in the forums I have not been able to complete what should be a very simple process. I would imagine there is some simple thing I missed, but after two days it is time for some fresh ideas.
I am setting up the credentials like this:
var awsConnection = {
    host: process.env.RDS_HOSTNAME,
    user: process.env.RDS_USERNAME,
    password: process.env.RDS_PASSWORD,
    port: process.env.RDS_PORT
};
For the host I am using the endpoint provided by AWS, and the username and password are the ones I use to log in to SQL Server Management Studio (which works). The port number is the one specified by AWS (1433, the default for SQL Server).
I call it in my api like this:
await sql.connect(connectionAWS).then(pool => {
    // Stored procedure
    console.log("awsConnection INSIDE: " + JSON.stringify(awsConnection));
    return pool.request()
        .input('repId', sql.VARCHAR(40), repObj.RepID)
        .execute('UserExistsBD');
}).then(async result => { ...
I added the console.log to see if we were getting past the login, and it appears that we do. I also used Telnet to make sure the endpoint/port combination works, and it does. I also checked AWS to make sure the subnets, route tables, and gateways were good and that my IP address was whitelisted. Any ideas would be very much appreciated!

Building a jump-table for boto3 clients/methods

I'm trying to build a jump-table of API methods for a variety of boto3 clients, so I can pass an AWS service name and an authn/authz low-level boto3 client to my utility code and execute the appropriate method to get a list of resources from the AWS service.
I'm not willing to hand-code and maintain a massive if..elif..else statement with >100 clauses.
I have a dictionary of service names (keys) and API method names (values), like this:
jumpTable = { 'lambda' : 'list_functions' }
I'm passed the service name ('lambda') and a boto3 client object ('client') already connected to the right service in the region and account I need.
I use the dict's get() to find the method name for the service, and then use a standard getattr() on the boto3 client object to get a method reference for the desired API call (which of course varies from service to service):
apimethod = jumpTable.get(service)
methodptr = getattr(client, apimethod)
Sanity-checking says I've got a "botocore.client.Lambda object" for 'client' (that looks OK to me) and a "bound method ClientCreator._create_api_method.<locals>._api_call of <botocore.client.Lambda ...>" for methodptr, which reports itself as being of type 'method'.
None of the API methods I'm using require arguments. When I invoke the method reference directly:
response = methodptr()
it returns a boto3 ClientError, while invoking it through the client:
response = client.methodptr()
returns a boto3 AttributeError.
Where am I going wrong here?
I'm locked into boto3, Python3, AWS and have to talk to 100s of AWS services, each of which has a different API method that provides the data I need to gather. To an old C coder, a jump-table seems obvious; a more Pythonic approach would be welcome...
The following works for me:
client = boto3.Session().client("lambda")
methodptr = getattr(client, apimethod)
methodptr()
Note that the boto3.Session() part is required. When calling boto3.client(..) directly, I get a 'UnrecognizedClientException' exception.
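Putting the question and this answer together, here is a minimal, self-contained sketch of the jump-table pattern; the 's3' entry is a hypothetical extra added only for illustration:
import boto3

# Map each AWS service name to the API method that lists its resources.
jumpTable = {
    'lambda': 'list_functions',
    's3': 'list_buckets',  # hypothetical extra entry for illustration
}

def list_resources(service):
    # Create the low-level client through a Session, as noted above.
    client = boto3.Session().client(service)
    apimethod = jumpTable.get(service)
    if apimethod is None:
        raise ValueError("No API method registered for service: " + service)
    methodptr = getattr(client, apimethod)
    return methodptr()

response = list_resources('lambda')
print(response['Functions'])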

How to connect Google Datastore from a script in Python 3

We want to do some work with the data that is in Google Datastore. We already have a database, and we would like to use Python 3 to handle the data and make queries from a script on our development machines. What would be the easiest way to accomplish this?
From the Official Documentation:
You will need to install the Cloud Datastore client library for Python:
pip install --upgrade google-cloud-datastore
Set up authentication by creating a service account and setting an environment variable. You can perform this step using either the GCP console or the command line; please take a look at the official documentation for more details.
Then you will be able to connect to your Cloud Datastore client and use it, as in the example below:
# Imports the Google Cloud client library
from google.cloud import datastore
# Instantiates a client
datastore_client = datastore.Client()
# The kind for the new entity
kind = 'Task'
# The name/ID for the new entity
name = 'sampletask1'
# The Cloud Datastore key for the new entity
task_key = datastore_client.key(kind, name)
# Prepares the new entity
task = datastore.Entity(key=task_key)
task['description'] = 'Buy milk'
# Saves the entity
datastore_client.put(task)
print('Saved {}: {}'.format(task.key.name, task['description']))
As #JohnHanley mentioned, you will find a good example on this Bookshelf app tutorial that uses Cloud Datastore to store its persistent data and metadata for books.
You can create a service account, download the credentials as JSON, and then set an environment variable called GOOGLE_APPLICATION_CREDENTIALS pointing to the JSON file. You can see the details at the link below.
https://googleapis.dev/python/google-api-core/latest/auth.html
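Since the original question also asks about making queries, here is a minimal sketch of reading entities back, assuming the same 'Task' kind as the example above and that GOOGLE_APPLICATION_CREDENTIALS is set:
from google.cloud import datastore

# The client picks up credentials from GOOGLE_APPLICATION_CREDENTIALS.
datastore_client = datastore.Client()

# Query all entities of kind 'Task' and print their descriptions.
query = datastore_client.query(kind='Task')
for task in query.fetch():
    print(task.key.name, task['description'])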

Unable to create s3 bucket using boto3

I'm trying to create an AWS S3 bucket from Python 3 using boto3. create_bucket() is the method I use, but I still get the error botocore.errorfactory.BucketAlreadyExists.
MY CODE:
import boto3

ACCESS_KEY = 'theaccesskey'
SECRET_KEY = 'thesecretkey'

S3 = boto3.client('s3',
                  aws_access_key_id=ACCESS_KEY,
                  aws_secret_access_key=SECRET_KEY)

response = S3.create_bucket(Bucket='mynewbucket',
                            CreateBucketConfiguration={'LocationConstraint': 'ap-south-1'})
ERROR:
botocore.errorfactory.BucketAlreadyExists: An error occurred (BucketAlreadyExists)
when calling the CreateBucket operation: The requested bucket name is not available.
The bucket namespace is shared by all users of the system.
Please select a different name and try again.
However, no bucket with that name exists in my account, and it still fails to create the bucket.
EDIT
I found the reason from the link and have also posted it as an answer in order to help others.
I got it after reading a few articles online. The bucket name must be globally unique; once it satisfies that condition, the code works as I expect.
I'm sharing this to help anyone else who wonders about this just like I did.
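To illustrate the globally-unique requirement, here is a small sketch that appends a random suffix to the requested name and surfaces the name-collision error explicitly; the suffix scheme and helper name are just examples:
import uuid

import boto3
from botocore.exceptions import ClientError

S3 = boto3.client('s3')  # credentials from the environment or ~/.aws/credentials

def create_unique_bucket(base_name, region='ap-south-1'):
    # Append a random suffix so the name is unlikely to collide with
    # bucket names owned by other AWS accounts.
    name = base_name + '-' + uuid.uuid4().hex[:8]
    try:
        S3.create_bucket(Bucket=name,
                         CreateBucketConfiguration={'LocationConstraint': region})
    except ClientError as e:
        if e.response['Error']['Code'] == 'BucketAlreadyExists':
            print(name + ' is already taken globally; pick a different name')
        raise
    return name

print(create_unique_bucket('mynewbucket'))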
