DynamoDB returns 200 statusCode for deleteItem but it still exists on the console - node.js

My table has a hash key userId (there is no range key).
I am calling the API as follows (from Node.js):
dynamo.deleteItem({
  "TableName": 'my-table',
  "Key": {
    "userId": '4ada7bbd-a8ac-4d29-94c6-e199a50430c9'
  }
}, function (err, data) {
  console.log(err, data);
});
I am calling this API and it returns a 200 status code (success), but the item still exists in the DynamoDB console, even after I refresh with the refresh button.
How is this possible?

Please keep in mind that the DeleteItem operation succeeds even if you delete a non-existing item. In your case, an item with the key "4ada7bbd-a8ac-4d29-94c6-e199a50430c9" probably doesn't exist; maybe there is a typo in the key or something similar.
Try using GetItem instead of DeleteItem; then you'll be able to verify that the item you think exists with this key actually doesn't. Or call GetItem after the DeleteItem to verify that the item is gone after the delete. Don't mix code and UI in the same test, because it's harder to know what went wrong if you can't paste a stand-alone failing snippet.
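For example, here is a minimal verification sketch along those lines, assuming the AWS SDK for JavaScript v2 and a DocumentClient (the table name and key are the ones from the question):

const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient();

const params = {
  TableName: 'my-table',
  Key: { userId: '4ada7bbd-a8ac-4d29-94c6-e199a50430c9' }
};

// 1. Check whether the item actually exists before deleting it.
docClient.get(params).promise()
  .then(({ Item }) => {
    if (!Item) {
      // Nothing stored under this key: DeleteItem would still return 200 here.
      console.log('No item found for this key');
      return;
    }
    // 2. Delete it, then read it back to confirm it is gone.
    return docClient.delete(params).promise()
      .then(() => docClient.get(params).promise())
      .then(({ Item: after }) => console.log('After delete:', after)); // undefined if deleted
  })
  .catch(console.error);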

Why does my Azure Cosmos DB SQL API Container Refuse Multiple Items With Same Partition Key Value?

In Azure Cosmos DB (SQL API) I've created a container whose "partition key" is set to /part_key and I am now trying to create and edit data in Data Explorer.
I created an item that looks like this:
{
  "id": "test_id",
  "value": "val000",
  "magicNumber": 32,
  "part_key": "asdf"
}
I am now trying to create an item that looks like this:
{
  "id": "frank",
  "value": "val001",
  "magicNumber": 33,
  "part_key": "asdf"
}
Based on the documentation I believe that each item within a partition key needs a distinct id, which to me implies that multiple items can in fact share a partition key, which makes a lot of sense.
However, I get an error when I try to save this second item:
{"code":409,"body":{"code":"Conflict","message":"Entity with the specified id already exists in the system...
I see that if I change the value of part_key to something else (say asdf2), then I can save this new item.
Either my expectations about this functionality are wrong, or else I'm doing this wrong somehow. What is wrong here?
Your understanding is correct. This can happen if you try to insert a new document whose id is equal to the id of an existing document in the same partition. That is not allowed, so the operation fails.
Before you insert the modified copy, you need to assign a new id to it. I tested the scenario and it works fine. Maybe try creating a new document and check.
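As a hedged illustration, here is a sketch using the @azure/cosmos SDK; the endpoint, key, and database/container names are placeholders, not taken from the question:

const { CosmosClient } = require('@azure/cosmos');

const client = new CosmosClient({ endpoint: 'https://<account>.documents.azure.com', key: '<key>' });
const container = client.database('<database>').container('<container>');

async function run() {
  // Two items may share part_key 'asdf' as long as their ids differ.
  await container.items.create({ id: 'test_id', value: 'val000', magicNumber: 32, part_key: 'asdf' });
  await container.items.create({ id: 'frank', value: 'val001', magicNumber: 33, part_key: 'asdf' });

  // Re-using an id that already exists under the same partition key value
  // fails with 409 Conflict, which is the error from the question.
  await container.items.create({ id: 'frank', value: 'val002', magicNumber: 34, part_key: 'asdf' });
}

run().catch((err) => console.error(err.code, err.message));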

Azure Search Service REST API Delete Error: "Document key cannot be missing or empty."

I am seeing some intermittent and odd behavior when trying to use the Azure Search Service REST API to delete a blob storage blob/document. It works, sometimes, and then other times I get this:
The request is invalid. Details: actions : 0: Document key cannot be missing or empty.
Once I start getting this error, it's the same results when I try to delete any of the document/blobs stored in that index. I do have 'metadata_storage_path' listed as my index key (see below).
I have not been able to get the query to succeed again, or I would examine the differences in Fiddler.
I have also tried the following with no luck:
Resetting and re-running the associated search indexer.
Creating a new indexer & index against the same container and deleting from that.
Creating a new container, indexer, & index and deleting from that.
Any additional suggestions or thoughts?
Copy/paste error: "metadata_storage_name" should be "metadata_storage_path".
[Insert head-banging-on-wall emoji here.]
For those who are still searching for the solution...
Instead of id,
{
  "value": [
    {
      "@search.action": "delete",
      "id": "TDVRT0FPQXcxZGtTQUFBQUFBQUFBQT090fdf"
    }
  ]
}
use the rid of your document to delete:
{
  "value": [
    {
      "@search.action": "delete",
      "rid": "TDVRT0FPQXcxZGtTQUFBQUFBQUFBQT090fdf"
    }
  ]
}
This is because, while creating the search index, you might have selected rid as your unique key column.
Note: we can delete a document only via its unique key column.
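As a hedged sketch of the REST call itself (the service name, index name, admin key, and api-version are placeholders; Node 18+ global fetch assumed):

const body = {
  value: [
    { '@search.action': 'delete', rid: 'TDVRT0FPQXcxZGtTQUFBQUFBQUFBQT090fdf' }
  ]
};

fetch('https://<service>.search.windows.net/indexes/<index>/docs/index?api-version=2020-06-30', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json', 'api-key': '<admin-key>' },
  body: JSON.stringify(body)
})
  .then((res) => res.json())
  .then((result) => console.log(result.value)) // per-document status for each action
  .catch(console.error);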

Cloud Datastore Projection Query with filters with AppEngine NodeJS Standard

I am learning GCP, and have searched through the documentation. The Projection queries documentation states that they can be used with filters albeit with some limitations. As far as I understand I am not falling within the limitations, but still I cannot make it work.
What I want to do is a simple
SELECT property FROM kind WHERE enabled = TRUE
The properties are marked as indexed, and I have also deployed an index.yaml. My code is the following:
const selectQuery = programTypeQuery
  .select(entityNameProperty)
  .filter('enabled', true);
When commenting out the select line, the query works. When commenting out the filter line, it also works. But when running both, I get the following message in Postman.
{
  "code": 9,
  "metadata": {
    "_internal_repr": {}
  },
  "note": "Exception occurred in retry method that was not classified as transient"
}
My log just shows a 400 status error.
Any help will be appreciated.
EDIT:
This is the full code. I have a parameter that indicates the language of the name. In the database I have nameEn and nameEs as properties, so I want to return only the name in the selected language. enabled is a boolean property that indicates whether the product is active or not.
const Datastore = require('@google-cloud/datastore');
const datastore = Datastore();

const programTypeQuery = datastore.createQuery('programType');
const entityNameProperty = 'name' + req.params.languageCode;

const selectQuery = programTypeQuery
  .select(entityNameProperty)
  .filter('enabled', true);

datastore.runQuery(selectQuery)
  .then((results) => {
    res.json(results);
  })
  .catch((err) => res.status(400).json(err));
From the details you provided it is hard to tell where this issue originates. Can you use the Google APIs Explorer for the Datastore API and try your query? I prepared a request body according to your description (shown below); you can execute it by just changing the projectId. By doing this you will receive either a successful response or an error message with details, which might make it easier to find the root cause.
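For instance, a request body along these lines could be sent to the projects.runQuery method of the Datastore v1 REST API (hedged: the property names assume the nameEn variant from your description):

{
  "query": {
    "kind": [{ "name": "programType" }],
    "projection": [{ "property": { "name": "nameEn" } }],
    "filter": {
      "propertyFilter": {
        "property": { "name": "enabled" },
        "op": "EQUAL",
        "value": { "booleanValue": true }
      }
    }
  }
}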
Most likely you are missing a composite index definition. You should be able to look at your GAE logs in Stackdriver to see the error message returned from Cloud Datastore.
Since your property name is dynamic, you won't be able to use a composite index effectively. You'll probably need to change your data model to something that doesn't use dynamic property names; one possible workaround is sketched below.
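A minimal sketch of one such workaround, assuming a recent @google-cloud/datastore release and the entity layout from the question: drop the projection, filter only on enabled, and pick the language-specific property in application code, so no composite index is required.

const { Datastore } = require('@google-cloud/datastore');
const datastore = new Datastore();

async function getNames(languageCode) {
  // Single-property equality filter: the built-in index on 'enabled' suffices.
  const query = datastore.createQuery('programType').filter('enabled', true);
  const [entities] = await datastore.runQuery(query);

  // Select the language-specific name here instead of in a projection.
  return entities.map((entity) => entity['name' + languageCode]);
}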

node-mongodb show update results

When I run

db.collection('example').update({ "a": 1 }, { "$set": { "b": 2 } }, { multi: true }, function (e, r) {

I get r:

{
  n: 3,
  nModified: 3,
  ok: 1
}
This works; if I look at my db I can see that I have successfully updated 3 documents, but where are my results?
Quoted from https://mongodb.github.io/node-mongodb-native/markdown-docs/insert.html
callback is the callback to be run after the records are updated. Has three parameters, the first is an error object (if error occured), the second is the count of records that were modified, the third is an object with the status of the operation.
I've tried with 3 parameters in the callback, but then I just get null as a result:

db.collection('example').update({ "a": 1 }, { "$set": { "b": 2 } }, { multi: true }, function (e, n, r) {

My documents have been successfully updated, but r is null!
I am expecting this to return my updated documents.
It doesn't look like this operation ever does, so how can I manually return the documents that got changed?
You can use findAndModify to get the updated document in the result. Its callback has 2 parameters:
1- error
2- the updated document
I am not sure this will work for you, but check the documentation for more info: https://mongodb.github.io/node-mongodb-native/markdown-docs/insert.html#find-and-modify
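A minimal sketch of the modern equivalent, findOneAndUpdate, assuming a 4.x/5.x node driver (where the result exposes the document as .value) and a placeholder connection string:

const { MongoClient } = require('mongodb');

async function run() {
  const client = await MongoClient.connect('mongodb://localhost:27017');
  const db = client.db('test');

  // returnDocument: 'after' returns the post-update document. Note that this
  // updates a single matching document, unlike update with multi: true.
  const result = await db.collection('example').findOneAndUpdate(
    { a: 1 },
    { $set: { b: 2 } },
    { returnDocument: 'after' }
  );
  console.log(result.value); // the updated document, or null if nothing matched

  await client.close();
}

run().catch(console.error);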
To get the updated documents in the returned result, you'll need to use the db.collection.bulkWrite method instead.

CouchDB: bulk_docs returning incorrect status code

I'm working on syncing a PouchDB database (with Angular) with a CouchDB database.
When the replication is in progress, the code is issuing a POST request to do a bulk update to http://127.0.0.1:5984/testdb/_bulk_docs.
I have a validation rule on the database to reject unauthorized writes, and it generates a forbidden error. The server therefore responds with a JSON body such as [{"id":"0951db944e729c981ad3964c22002d55","rev":"8-ccdcb52743cae43c5870113f09f2e25a","error":"forbidden","reason":"Not Authorized"}]
According to the docs (at the end of the page), the above response should generate a 417 Expectation Failed status code. However, it currently generates a 201 Created status code.
Because of the incorrect response code, the client (PouchDB) shows as all records synced, but the updates are not written to the server (CouchDB).
Is there a config option to change this status code?
For reference, my validate_doc_update function is as follows.
function (newDoc, oldDoc, userCtx) {
  if (!userCtx) throw({ forbidden: 'Need a user to update' });
  if ((userCtx.roles.indexOf('_admin') == -1) && (userCtx.roles.indexOf('backend:manager') == -1)) {
    throw({ forbidden: "Not Authorized" });
  }
}
The 417 Expectation Failed status code only applies when the all_or_nothing parameter is set to true. By default this parameter is false.
The default bulk update transaction mode in CouchDB is non-atomic, which guarantees only that some of the documents will be saved. If a document is not saved, the API returns an error object like the one you got, along with the list of documents that were in fact saved successfully. So 201 seems to be the correct response.
You then have to walk through the response to find which documents failed and update them manually.
In all_or_nothing mode, however, a success is returned only if all the documents have been updated.
While syncing, you can also use the _replicate endpoint, which has many features that bulk update does not.
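As a hedged sketch of walking that response (host and database taken from the question, credentials omitted; Node 18+ global fetch assumed):

async function bulkSave(docs) {
  const res = await fetch('http://127.0.0.1:5984/testdb/_bulk_docs', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ docs })
  });
  const results = await res.json(); // HTTP 201 even when individual docs were rejected

  // Saved docs carry ok/rev; rejected ones carry error/reason instead.
  const failed = results.filter((r) => r.error);
  failed.forEach((r) => console.log('doc ' + r.id + ' failed: ' + r.error + ' (' + r.reason + ')'));
  return failed;
}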
