How to use batchWriteItem on AppSync with a Resolver

I have a problem with AppSync and DynamoDB on Amazon.
I want to delete multiple rows from a table. I tried batchWriteItem, but the resolver rejected it as an unsupported operation.
So how do I use batchWriteItem to delete multiple rows?
Thanks.

The batch operations supported by DynamoDB resolvers in AWS AppSync are:
BatchPutItem
BatchGetItem
BatchDeleteItem
These operations also require a "tables" key in your resolver's request mapping template, which lets you perform the operation against one or more tables in a single GraphQL query or mutation.
With a DynamoDB data source you'll also find these listed in the sample templates in the console.
You can find an in-depth tutorial here: https://docs.aws.amazon.com/appsync/latest/devguide/tutorial-dynamodb-batch.html
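For example, a minimal BatchDeleteItem request mapping template, modeled on that tutorial, could look like this. It assumes a table named Posts and a mutation that takes an ids: [ID] argument for a string id hash key; adjust the table and field names to your schema.

    #set($deleteKeys = [])
    #foreach($id in ${ctx.args.ids})
        ## Build a { "id": { "S": "..." } } DynamoDB key for each argument
        #set($key = {})
        $util.qr($key.put("id", $util.dynamodb.toString($id)))
        $util.qr($deleteKeys.add($key))
    #end
    {
        "version": "2018-05-29",
        "operation": "BatchDeleteItem",
        "tables": {
            "Posts": $util.toJson($deleteKeys)
        }
    }

The response mapping template can then return $util.toJson($ctx.result.data.Posts). Note that batch templates must use version 2018-05-29.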

Related

How to use Dynamoose in an AWS Lambda function?

I am new to AWS Lambda and Amazon DynamoDB,
so I could not figure out how to create a table with a Dynamoose schema and build CRUD operations with it.
I am familiar with Mongoose, and I read that Dynamoose is inspired by it.
Can anybody tell me how to build CRUD operations in Lambda using a Dynamoose schema?
The same way you would create the CRUD operations without Lambda. The differences are that you create one CRUD operation per Lambda function/API endpoint, that you need to create an appropriate Lambda execution role, and that you need to bundle the dependencies with the function.
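As a minimal sketch, a Lambda handler using Dynamoose could look like the following. The model, key and field names are illustrative, not from the question.

    const dynamoose = require('dynamoose');

    // Define the schema/model once, outside the handler, so warm
    // invocations reuse it instead of redefining it on every call.
    const Cat = dynamoose.model('Cat', new dynamoose.Schema({
        id: { type: Number, hashKey: true },
        name: String,
    }));

    // One CRUD operation per function: this handler creates an item
    // and reads it back. The execution role needs dynamodb:PutItem
    // and dynamodb:GetItem on the table.
    exports.handler = async (event) => {
        await new Cat({ id: event.id, name: event.name }).save();
        return Cat.get(event.id);
    };

The update and delete handlers follow the same shape with Cat.update and Cat.delete.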

MongoDB or Mongoose: fetch all records from collection A inside an aggregate query that fetches collection B's data

I am writing a REST API using Node.js, with MongoDB 3.6 as the database.
Collection names: Subscription, Users and Offering.
I am using an aggregation pipeline to fetch data from Subscription, and with $lookup I fetch the user who subscribed.
In that same output I also want to list all the records from the Offering collection as an array.
How can I get this?
Thanks in advance.
I don't think you can, and you shouldn't: fetching all records of a collection is bad practice, so always limit yourself to only the things you need.
If you really want to add results from a totally unrelated collection, make a separate request and merge the two result sets in the JSON you send to the client.
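A sketch of that approach with the Node.js driver inside an Express handler, assuming a connected db handle and that Subscription documents carry a userId field referencing Users._id (both assumptions, adjust to your schema):

    // Run the aggregation and the unrelated Offering query in parallel,
    // then combine them in the response payload.
    const [subscriptions, offerings] = await Promise.all([
        db.collection('Subscription').aggregate([
            { $lookup: {
                from: 'Users',
                localField: 'userId',   // assumed field name
                foreignField: '_id',
                as: 'user',
            } },
        ]).toArray(),
        // Cap the result set rather than pulling the whole collection.
        db.collection('Offering').find({}).limit(100).toArray(),
    ]);

    res.json({ subscriptions, offerings });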

Cosmos DB REST API ORDER BY with partitioning

I'm using the REST API with SQL for Cosmos DB and need to return a query's results ordered by a timestamp (stored as a numeric UNIX timestamp). I'm trying to do this with a simple ORDER BY.
e.g. SELECT * FROM requests c ORDER BY c.timestamp
However with partitioning I get the error:
"Cross partition query with TOP/ORDER BY or aggregate functions is not
supported."
In the collection settings the indexing precision for strings is set to -1, which was a suggestion from elsewhere, but the error is still thrown.
If I remove the x-ms-documentdb-query-enablecrosspartition header, or set it to false, then I get:
"Cross partition query is required but disabled. Please set
x-ms-documentdb-query-enablecrosspartition to true, specify
x-ms-documentdb-partitionkey, or revise your query to avoid this
exception."
Has anyone had any success doing this via the SQL REST API?
Thanks.
I reproduced your issue on my side.
However, based on this official statement, the Java and Node.js SDKs do support TOP and ORDER BY queries on partitioned collections. I tested the same query via the SDK and it works.
Updated answer:
I used the Fiddler tool to observe the requests the SDK makes, and found that three requests are involved.
One:
When I ran the query via the SDK, the first request failed with exactly the same error the REST call received. However, the SDK then retries for me to get the _rid property of the partition.
Two:
I did not find any clear official explanation for this, but after reading this article I believe the "cross partition" here refers to physical partitions, not logical partitions. So this request fetches the _rid of the physical partition your data is stored in, along with its PartitionKeyRanges.
Three:
The SDK then resends the query with an additional header, x-ms-documentdb-partitionkeyrangeid, and the results are returned correctly. Please notice the rewritten SQL in that last request.
I think you could emulate the requests the SDK makes to fulfill your needs; a rough sketch follows.
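Here is that emulation sketched against the REST API, using Node 18's built-in fetch inside an async function. The authHeaders(verb, resourceType, resourceLink) helper is hypothetical, standing in for the code that builds the Authorization, x-ms-date and x-ms-version headers (see the REST auth docs); the account and collection links are placeholders, and per-range paging via x-ms-continuation is omitted for brevity.

    const base = 'https://myaccount.documents.azure.com'; // placeholder account
    const coll = 'dbs/mydb/colls/requests';               // placeholder collection

    // Steps 1-2: list the collection's physical partition key ranges.
    const pkRes = await fetch(`${base}/${coll}/pkranges`, {
        headers: authHeaders('get', 'pkranges', coll),
    });
    const { PartitionKeyRanges } = await pkRes.json();

    // Step 3: run the query once per range, tagging each request with
    // x-ms-documentdb-partitionkeyrangeid, then merge client side.
    const docs = [];
    for (const range of PartitionKeyRanges) {
        const qRes = await fetch(`${base}/${coll}/docs`, {
            method: 'POST',
            headers: {
                ...authHeaders('post', 'docs', coll),
                'Content-Type': 'application/query+json',
                'x-ms-documentdb-isquery': 'true',
                'x-ms-documentdb-partitionkeyrangeid': range.id,
            },
            body: JSON.stringify({
                query: 'SELECT * FROM requests c ORDER BY c.timestamp',
                parameters: [],
            }),
        });
        const page = await qRes.json();
        docs.push(...(page.Documents || []));
    }

    // Each range comes back ordered on its own, so re-sort the merged set.
    // (The SDK instead rewrites the SQL and merges the ordered streams,
    // which is the rewritten query visible in Fiddler.)
    docs.sort((a, b) => a.timestamp - b.timestamp);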

Bulk import using Node.js with Azure DocumentDB: partition key issue

I'm attempting to bulk import a lot of data into DocumentDB using Node. I've created the recommended stored procedure (the one that has been posted numerous times here and on MSDN). However, when I try executing it via Node, I get the following response: "Requests originating from scripts cannot reference partition keys other than the one for which client request was submitted."
The partitionKey on the Collection is "/id", and my request in Node looks like this:
client.executeStoredProcedure('dbs/blah/blah/sprocs/blah', [docs], {partitionKey: '/id'}, ...);
I can't find any concise documentation on this as it relates to Node specifically. Has anyone encountered this issue, and is there a solution? I'm completely open to the idea that I'm making a silly mistake, but this is my first rodeo, so to speak, with DocumentDB. Thank you all.
I suspect that you have data in the set you are trying to import with an /id other than the one you specified. If that's the case, I can think of two options:
Don't use the sproc and load the documents one at a time from your Node.js script. I recommend using one of the async.js functions, or the equivalent from another parallelization library, so you can have many requests going in parallel.
You can separate the data client side by /id and then call the sproc for each set, as sketched below.
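A sketch of option 2 with the documentdb Node SDK, inside an async function, assuming docs is the array you are importing. Note that the partitionKey option must be the key value for that batch, not the path '/id':

    // Group the documents client side by their partition key value.
    const groups = new Map();
    for (const doc of docs) {
        if (!groups.has(doc.id)) groups.set(doc.id, []);
        groups.get(doc.id).push(doc);
    }

    // Call the sproc once per group, passing that group's key value.
    for (const [key, batch] of groups) {
        await new Promise((resolve, reject) => {
            client.executeStoredProcedure(
                'dbs/blah/blah/sprocs/blah',  // sproc link from the question
                [batch],
                { partitionKey: key },        // the value, not '/id'
                (err, result) => (err ? reject(err) : resolve(result))
            );
        });
    }

With /id as the partition key every document is its own partition, so each batch here holds a single document and option 1 is usually the better fit.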

Do any cloud object stores support object metadata indices?

I have a very large document store - about 50 million JSON docs, with 50m more added per year. Each is about 10K. I would like to store them in cloud storage and retrieve them via a couple of structured metadata indices that I would update as I add documents to the store.
It looks like AWS S3, Google Cloud Storage and Azure allow custom metadata to be returned with an object, but not used as part of a GET request to filter a collection of objects.
Is there a good solution "out-of-the-box" for this? I can't find any, but my use case shouldn't really be unusual. I don't need to query by document attributes or return partial documents; I just need to GET a collection of documents by filtering on a handful of metadata fields.
The AWS SimpleDB page mentions "Indexing Amazon S3 Object Metadata" as a use case, and links to a library that hasn't been updated since 2009.
They are simply saying that you can store and query the metadata in Amazon SimpleDB, which is a NoSQL database provided by Amazon. Depending on the kind of metadata you have, you could also store it in an RDBMS: a few hundred million rows isn't too much if you create the proper indices, and you can store URLs or file names to access the files on S3, Azure, etc. afterwards.
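A sketch of that pattern: keep the documents in S3 and a queryable metadata index elsewhere. Here sql() is a hypothetical query helper standing in for whatever RDBMS client you use, and the bucket, table and column names are made up:

    const AWS = require('aws-sdk');
    const s3 = new AWS.S3();

    // Write path: store the document in S3, then index its metadata.
    async function storeDoc(id, doc, meta) {
        await s3.putObject({
            Bucket: 'my-doc-store',          // assumed bucket name
            Key: `docs/${id}.json`,
            Body: JSON.stringify(doc),
        }).promise();
        await sql(
            'INSERT INTO doc_index (id, customer, created_at) VALUES (?, ?, ?)',
            [id, meta.customer, meta.createdAt]
        );
    }

    // Read path: filter on metadata in the index, then GET the objects.
    async function fetchDocs(customer) {
        const rows = await sql('SELECT id FROM doc_index WHERE customer = ?', [customer]);
        return Promise.all(rows.map(async ({ id }) => {
            const obj = await s3.getObject({
                Bucket: 'my-doc-store',
                Key: `docs/${id}.json`,
            }).promise();
            return JSON.parse(obj.Body.toString());
        }));
    }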
