If I scan this way, it works fine:
aws dynamodb scan --table-name my_table --select "COUNT" --filter-expression "attribute_type(destination.amount, :v_sub)" --expression-attribute-values file://expression-attribute-values.json
where destination.amount is a valid document path in the item.
But now I want to read from a nested path whose last element contains a colon: sender.custom:codigoCorporativa.
How can I read from sender.custom:codigoCorporativa? I can't do this:
aws dynamodb scan --table-name my_table --select "COUNT" --filter-expression "attribute_type(sender.custom:codigoCorporativa, :v_sub)" --expression-attribute-values file://expression-attribute-values.json
because I get this error:
An error occurred (ValidationException) when calling the Scan operation: Invalid FilterExpression: Syntax error; token: ":codigoCorporativa", near: "custom:codigoCorporativa,"
I tried to use:
sender.custom:codigoCorporativa
"sender.custom:codigoCorporativa"
\"sender.custom:codigoCorporativa\"
sender.'custom:codigoCorporativa'
Any ideas?
Edit: I've just read in the AWS DynamoDB docs that special characters like # and : should be avoided in attribute names, but I need to scan an attribute name that contains : right now.
Well, I had to use an expression attribute name as a placeholder for the path element that contains the colon:
aws dynamodb scan --table-name my_table --select "COUNT" --filter-expression "attribute_type(sender.#code, :v_sub)" --expression-attribute-values file://expression-attribute-values.json --expression-attribute-names '{"#code":"custom:codigoCorporativa"}'
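For reference, a minimal boto3 sketch of the same scan (the table name, the nested path, and the type checked via :v_sub are assumptions taken from the question, not confirmed details):
import boto3

client = boto3.client('dynamodb')

# The placeholder #code stands in for the path element whose name contains a colon.
response = client.scan(
    TableName='my_table',
    Select='COUNT',
    FilterExpression='attribute_type(sender.#code, :v_sub)',
    ExpressionAttributeNames={'#code': 'custom:codigoCorporativa'},
    ExpressionAttributeValues={':v_sub': {'S': 'S'}},  # 'S' = check that the attribute is a string
)
print(response['Count'])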
Related
I have a problem with DynamoDB update_item; here is my code:
response = dynamoTable.update_item(
    Key={
        'User': event['User']
    },
    # Walk the nested map: time[0] -> year -> month -> day -> hour
    UpdateExpression="SET #T[0].#Year.#Month.#Day.#Hour = :i",
    ExpressionAttributeNames={
        "#T": 'time',
        "#Year": event["Year"],
        "#Month": event["Month"],
        "#Day": event["Day"],
        "#Hour": event["Hour"]
    },
    ExpressionAttributeValues={
        ':i': event["Value"]
    }
)
I want to store my daily data in DynamoDB. For example, 2020-01-02 03:00 with value 4 should update my table like this:
time: [{'2020': {'01': {'02': {'03': 4}}}}]
I can update data for another hour of the same day (2020-01-02 04, value 3), but I can't update data for another day (2020-01-03 03, value 3).
The error message is:
The document path provided in the update expression is invalid for update
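A likely cause, not confirmed in the thread: SET on a nested document path fails with exactly this error when any parent map along the path does not exist yet, and DynamoDB will not create the intermediate maps for you. A minimal sketch of a workaround, reusing the dynamoTable resource and event fields from the question (and assuming the year and month maps already exist; each missing level needs the same treatment):
# Step 1: create the map for the new day if it is missing.
dynamoTable.update_item(
    Key={'User': event['User']},
    UpdateExpression="SET #T[0].#Year.#Month.#Day = if_not_exists(#T[0].#Year.#Month.#Day, :empty)",
    ExpressionAttributeNames={
        "#T": 'time',
        "#Year": event["Year"],
        "#Month": event["Month"],
        "#Day": event["Day"]
    },
    ExpressionAttributeValues={':empty': {}}
)
# Step 2: the original update now succeeds, because the day map exists.
dynamoTable.update_item(
    Key={'User': event['User']},
    UpdateExpression="SET #T[0].#Year.#Month.#Day.#Hour = :i",
    ExpressionAttributeNames={
        "#T": 'time',
        "#Year": event["Year"],
        "#Month": event["Month"],
        "#Day": event["Day"],
        "#Hour": event["Hour"]
    },
    ExpressionAttributeValues={':i': event["Value"]}
)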
I am using this tutorial to link Rekognition results to a DynamoDB table.
It is giving me this error:
{
  "errorMessage": "Unable to get object metadata from S3. Check object key, region and/or access permissions.",
  "errorType": "InvalidS3ObjectException",
  "stackTrace": [
    "Request.extractError (/var/runtime/node_modules/aws-sdk/lib/protocol/json.js:48:27)",
    "Request.callListeners (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:105:20)",
    "Request.emit (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:77:10)",
    "Request.emit (/var/runtime/node_modules/aws-sdk/lib/request.js:683:14)",
    "Request.transition (/var/runtime/node_modules/aws-sdk/lib/request.js:22:10)",
    "AcceptorStateMachine.runTo (/var/runtime/node_modules/aws-sdk/lib/state_machine.js:14:12)",
    "/var/runtime/node_modules/aws-sdk/lib/state_machine.js:26:10",
    "Request.<anonymous> (/var/runtime/node_modules/aws-sdk/lib/request.js:38:9)",
    "Request.<anonymous> (/var/runtime/node_modules/aws-sdk/lib/request.js:685:12)",
    "Request.callListeners (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:115:18)"
  ]
}
The code I used from GitHub is this. I made sure the region is the same for the Lambda function, the S3 bucket, and the DynamoDB table.
I am a beginner at this, so any help will be appreciated!
Thanks!
Edit:
I made some modifications and now it is giving me this:
{
  "errorMessage": "Requested resource not found",
  "errorType": "ResourceNotFoundException",
  "stackTrace": [
    "Request.extractError (/var/runtime/node_modules/aws-sdk/lib/protocol/json.js:48:27)",
    "Request.callListeners (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:105:20)",
    "Request.emit (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:77:10)",
    "Request.emit (/var/runtime/node_modules/aws-sdk/lib/request.js:683:14)",
    "Request.transition (/var/runtime/node_modules/aws-sdk/lib/request.js:22:10)",
    "AcceptorStateMachine.runTo (/var/runtime/node_modules/aws-sdk/lib/state_machine.js:14:12)",
    "/var/runtime/node_modules/aws-sdk/lib/state_machine.js:26:10",
    "Request.<anonymous> (/var/runtime/node_modules/aws-sdk/lib/request.js:38:9)",
    "Request.<anonymous> (/var/runtime/node_modules/aws-sdk/lib/request.js:685:12)",
    "Request.callListeners (/var/runtime/node_modules/aws-sdk/lib/sequential_executor.js:115:18)"
  ]
}
The fact that you're seeing ResourceNotFoundException suggests a couple of potential causes:
the Lambda function could not find the DynamoDB table: make sure you modified config.js to include the name of the DynamoDB table correctly, by setting config.dynamo.tableName = '<your table>'
Rekognition could not read the image from S3: make sure the image filename is of the form faces.jpg rather than test faces.jpg (the space gets escaped to test+faces.jpg in the object key)
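On the second point: object keys arrive URL-encoded in S3 event notifications, so a handler generally has to decode the key before calling Rekognition. A small illustration of the decoding step in Python (the tutorial's handler is Node.js, so this is only a sketch, with a hypothetical key):
import urllib.parse

raw_key = 'test+faces.jpg'                        # key as delivered in the S3 event record
decoded_key = urllib.parse.unquote_plus(raw_key)  # -> 'test faces.jpg'
print(decoded_key)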
There are a couple of reasons why this could be happening:
1) The resource really does not exist. Triple-check the bucket name, the DynamoDB table name, the regions, etc.
2) It's very likely that your function lacks permissions. Check the IAM role that your Lambda function is using and attach the right policies to it. In this case, your function needs access to S3, DynamoDB and Rekognition. Make sure all of these policies are attached to the IAM role.
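If it does turn out to be permissions, one quick (if coarse) way to unblock a test is to attach AWS managed policies to the function's execution role. A boto3 sketch, where the role name is a hypothetical placeholder and the broad policies should be scoped down for anything real:
import boto3

iam = boto3.client('iam')
role_name = 'my-rekognition-lambda-role'  # hypothetical: use your Lambda's actual execution role

for policy_arn in [
    'arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess',
    'arn:aws:iam::aws:policy/AmazonRekognitionReadOnlyAccess',
    'arn:aws:iam::aws:policy/AmazonDynamoDBFullAccess',
]:
    iam.attach_role_policy(RoleName=role_name, PolicyArn=policy_arn)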
I am following the AWS Node.js tutorial, but I cannot get the provided code to work.
Specifically, I am trying to create a "Movies" table on a local instance of DynamoDB and load it with some provided sample data. However, I am receiving "Cannot do operations on a non-existent table" when I try to create the table, which seems a bit strange.
For my setup, I am running DynamoDB Local in one console window. The command I am using and its output are as follows:
COMPUTERNAME:node-dyna-demo myUserName$ java -Djava.library.path=./dynamodb_local_latest/DynamoDBLocal_lib -jar ./dynamodb_local_latest/DynamoDBLocal.jar -sharedDb
Initializing DynamoDB Local with the following configuration:
Port: 8000
InMemory: false
DbPath: null
SharedDb: true
shouldDelayTransientStatuses: false
CorsParams: *
In a separate console, I am executing the following code:
const AWS = require('aws-sdk')

AWS.config.update({
  credentials: {
    accessKeyId: 'testAccessId',
    secretAccessKey: 'testAccessKey'
  },
  region: 'us-west-2',
  endpoint: 'http://localhost:8000'
})
const dynamodb = new AWS.DynamoDB()
const docClient = new AWS.DynamoDB.DocumentClient()
const dbParams = {
  TableName: "Movies",
  KeySchema: [ … ],
  AttributeDefinitions: [ … ],
  ProvisionedThroughput: { … }
}
dynamodb.createTable(dbParams, function(err, data) {
  if (err) {
    console.error(
      'Unable to create table. Error JSON:',
      JSON.stringify(err, null, 2)
    )
  } else {
    console.log(
      'Created table. Table description JSON:',
      JSON.stringify(data, null, 2)
    )
  }
})
The error I get from execution is:
Unable to create table. Error JSON: {
  "message": "Cannot do operations on a non-existent table",
  "code": "ResourceNotFoundException",
  "time": "2018-01-24T15:56:13.229Z",
  "requestId": "c8744331-dd19-4232-bab1-87d03027e7fc",
  "statusCode": 400,
  "retryable": false,
  "retryDelay": 9.419252980728942
}
Does anyone know a possible cause for this exception?
This kind of issue can occur when DynamoDB Local is started without the -sharedDb flag.
In your case, DynamoDB Local is started as expected:
java -Djava.library.path=./dynamodb_local_latest/DynamoDBLocal_lib -jar ./dynamodb_local_latest/DynamoDBLocal.jar -sharedDb
Try the following solutions.
Solution 1: Remove the credential info from the AWS config:
AWS.config.update({
  region: 'us-west-2',
  endpoint: 'http://localhost:8000'
})
Reason: if DynamoDB Local is running without the -sharedDb flag, it creates a separate database per set of credentials, so a table created with one access key is not visible with another.
Solution 2: Create the table using the DynamoDB shell (http://localhost:8000/shell/#) and verify whether it was created using the following command:
aws dynamodb list-tables --endpoint-url http://localhost:8000
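A small boto3 sketch of the same verification against the local endpoint (the dummy credentials mirror the question and only matter when -sharedDb is not used):
import boto3

# With -sharedDb every credential/region sees the same shared database file;
# without it, DynamoDB Local keeps a separate database per access key and region.
client = boto3.client(
    'dynamodb',
    endpoint_url='http://localhost:8000',
    region_name='us-west-2',
    aws_access_key_id='testAccessId',
    aws_secret_access_key='testAccessKey'
)
print(client.list_tables()['TableNames'])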
The first answer gave me a hint for my solution. I am using Java in combination with Maven; hopefully my answer can help others. The AWS region was the conflict: the Java code points to eu-west-1.
amazonDynamoDB = AmazonDynamoDBClientBuilder
    .standard()
    .withEndpointConfiguration(
        new AwsClientBuilder.EndpointConfiguration(
            "http://localhost:" + LocalDbCreationRule.port, "eu-west-1"))
    .build();
But my AWS config on my local machine, at
~/.aws/config
showed the following:
[default]
region = eu-central-1
I changed the region to eu-west-1, matching the Java code, in my local config and the issue was resolved.
I am trying to create a table with three fields in DynamoDB using flask-dynamo, but I get this error:
botocore.exceptions.ClientError
botocore.exceptions.ClientError: An error occurred (ValidationException) when calling the CreateTable operation: The number of attributes in key schema must match the number of attributesdefined in attribute definitions
Here is the configuration to create the DynamoDB table:
from flask import Flask
from flask_dynamo import Dynamo

app = Flask(__name__)

@app.route('/create_table')
def create_table():
    app.config['DYNAMO_TABLES'] = [
        {
            'TableName': "user_detail",
            'KeySchema': [
                {'AttributeName': "timestamp", 'KeyType': "HASH"},
                {'AttributeName': "question", 'KeyType': "RANGE"},
            ],
            'AttributeDefinitions': [
                {'AttributeName': "timestamp", 'AttributeType': "S"},
                {'AttributeName': "question", 'AttributeType': "N"},
                {'AttributeName': "user", 'AttributeType': "N"},
            ],
            'ProvisionedThroughput': {
                'ReadCapacityUnits': 40,
                'WriteCapacityUnits': 40
            }
        }]
    dynamo = Dynamo(app)
    with app.app_context():
        dynamo.create_all()
    return "Table created"
Thanks in advance
You need to remove the following line:
{'AttributeName': "user", 'AttributeType': "N"},
With DynamoDB (as with most NoSQL databases) you don't need to specify every record attribute ahead of time; you only need to define the hash and range key attributes.
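Put differently, AttributeDefinitions may only list attributes that are actually used as keys of the table or of a secondary index. The corrected table definition from the question would then look like this (everything else unchanged; "user" can still be written on items, it just is not declared here):
app.config['DYNAMO_TABLES'] = [
    {
        'TableName': "user_detail",
        'KeySchema': [
            {'AttributeName': "timestamp", 'KeyType': "HASH"},
            {'AttributeName': "question", 'KeyType': "RANGE"},
        ],
        # Only the key attributes are declared.
        'AttributeDefinitions': [
            {'AttributeName': "timestamp", 'AttributeType': "S"},
            {'AttributeName': "question", 'AttributeType': "N"},
        ],
        'ProvisionedThroughput': {
            'ReadCapacityUnits': 40,
            'WriteCapacityUnits': 40
        }
    }]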
I'm trying to implement cursor-based pagination with DynamoDB (definitely not easy to do pagination in DynamoDB...) using a query request with ExclusiveStartKey.
My table's primary key is "id", and I have a GSI on "owner" (partition key) and "created_at" (range key).
I can easily retrieve the first 10 records with a query request, by specifying the GSI index and the "owner" property.
However, on subsequent requests, the ExclusiveStartKey only works if I specify the THREE elements from both indexes (so "id", "owner" AND "created_at").
While I understand "id" and "owner", as those are the partition keys and are needed to "locate" the record, I don't see why DynamoDB requires me to specify "created_at". This is annoying because it means that the consumer must submit not only the "id" as cursor, but also the "created_at".
As DynamoDB could find the record using the "id" (which is guaranteed unique), why do I need to specify this "created_at"?
Thanks
GSI primary keys are not necessarily unique. Base table keys are necessary to answer the question, "For a given owner and creation date, up to which id did I read in this page?". Put another way, you could have multiple items with the same owner and creation date.
In my testing, querying a GSI on a table resulted in a LastEvaluatedKey containing all the key attributes (essentially the GSI key plus the table key). I needed to add all elements of the LastEvaluatedKey to the next request as ExclusiveStartKey to get the next page. If I excluded any element of the LastEvaluatedKey from the next request, I received an exclusive start key error.
The following request:
aws dynamodb query --table-name MyTable --index-name MyIndex --key-condition-expression "R = :type" --expression-attribute-values '{\":type\":{\"S\":\"Blah\"}}' --exclusive-start-key '{\"I\":{\"S\":\"9999\"},\"R\":{\"S\":\"Blah\"},\"S\":{\"S\":\"Bluh_999\"},\"P\":{\"S\":\"Blah_9999~Sth\"}}' --limit 1
Resulted in the following response:
{
    "Items": [
        {
            "I": {
                "S": "9999"
            },
            "R": {
                "S": "Blah"
            },
            "S": {
                "S": "Bluh_999"
            },
            "P": {
                "S": "Blah_9999~Sth"
            }
        }
    ],
    "Count": 1,
    "ScannedCount": 1,
    "LastEvaluatedKey": {
        "I": {
            "S": "9999"
        },
        "R": {
            "S": "Blah"
        },
        "S": {
            "S": "Bluh_999"
        },
        "P": {
            "S": "Blah_9999~Sth"
        }
    }
}
If I left off some elements of the last evaluated key, for example (same request as above minus the table partition/sort keys):
aws dynamodb query --table-name MyTable --index-name MyIndex --key-condition-expression "R = :type" --expression-attribute-values '{\":type\":{\"S\":\"Blah\"}}' --exclusive-start-key '{\"I\":{\"S\":\"9999\"},\"R\":{\"S\":\"Blah\"}}' --limit 1
I get the following error:
An error occurred (ValidationException) when calling the Query operation: The provided starting key is invalid
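For completeness, a boto3 sketch of the same pagination loop; the table name, the index name, and the attribute names ("owner", "created_at") follow the question and are assumptions here:
import boto3

client = boto3.client('dynamodb')

params = {
    'TableName': 'MyTable',
    'IndexName': 'owner-created_at-index',   # hypothetical GSI name
    'KeyConditionExpression': '#o = :owner',
    'ExpressionAttributeNames': {'#o': 'owner'},
    'ExpressionAttributeValues': {':owner': {'S': 'some-owner'}},
    'Limit': 10,
}

while True:
    page = client.query(**params)
    for item in page['Items']:
        print(item)
    # LastEvaluatedKey contains the GSI key attributes AND the base table key;
    # pass it back verbatim as ExclusiveStartKey to resume from the same spot.
    last_key = page.get('LastEvaluatedKey')
    if not last_key:
        break
    params['ExclusiveStartKey'] = last_key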