I have a table called followProduct in DynamoDB and it has the following structure:
id - item id
email - user email
product - product id
Whenever a user follows a product I make an entry in the table. I am trying to prevent duplicate entries using the following code:
let params = {
    TableName: "followProduct",
    ConditionExpression: "email <> :email AND product <> :pid",
    Item: {
        email: "a#a.com",
        product: req.body.productId,
        id: shortid.generate()
    },
    ExpressionAttributeValues: {
        ":email": "a#a.com",
        ":pid": req.body.productId
    }
};
createItemInDDB(params).then(() => {
    res.status(200).send("Company Added");
}, err => {
    console.log(err);
    res.sendStatus(500);
});
createItemInDDB is just a function that takes params as input and runs the put function provided by the DocumentClient. These params still allow a duplicate entry. I want each product id to be entered only once per email.
Can you describe your table's hash-range keys?
DynamoDB can enforce uniqueness only for hash-range table keys (not for global secondary index keys).
From http://docs.aws.amazon.com/amazondynamodb/latest/APIReference/API_PutItem.html:
To prevent a new item from replacing an existing item, use a conditional expression that contains the attribute_not_exists function with the name of the attribute being used as the partition key for the table. Since every record must contain that attribute, the attribute_not_exists function will only succeed if no matching item exists.
and http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Expressions.ConditionExpressions.html:
The PutItem operation will overwrite an item with the same key (if it exists). If you want to avoid this, use a condition expression. This allows the write to proceed only if the item in question does not already have the same key.
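As a rough sketch of how that applies here, assuming the followProduct table is re-keyed so that email is the partition key and product is the sort key (the generated id is then no longer needed as a key):

// Sketch only: assumes the table's partition key is "email" and its sort
// key is "product", so the (email, product) pair IS the primary key.
const AWS = require("aws-sdk");
const docClient = new AWS.DynamoDB.DocumentClient();

const params = {
    TableName: "followProduct",
    Item: {
        email: "a#a.com",
        product: req.body.productId
    },
    // With a composite primary key, checking one key attribute is enough:
    // if an item with this exact (email, product) key existed, "email"
    // would exist on it and the condition would fail.
    ConditionExpression: "attribute_not_exists(email)"
};

docClient.put(params).promise()
    .then(() => res.status(200).send("Product followed"))
    .catch(err => {
        if (err.code === "ConditionalCheckFailedException") {
            // The user already follows this product.
            res.status(409).send("Already following");
        } else {
            console.log(err);
            res.sendStatus(500);
        }
    });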
Related
I am using Sequelize in my Node.js server. I am getting validation errors because my code tries to write the record twice, instead of creating it once and then updating it since it's already in the DB (PostgreSQL).
This is the flow I use when the request runs:
const latitude = req.body.latitude;

var metrics = await models.user_car_metrics.findOne({ where: { user_id: userId, car_id: carId } });
if (metrics) {
    metrics.latitude = latitude;
    // ...
} else {
    metrics = models.user_car_metrics.build({
        user_id: userId,
        car_id: carId,
        latitude: latitude
        // ...
    });
}
var savedMetrics = await metrics.save();
return res.status(201).json(savedMetrics);
At times, if the client calls the endpoint twice or more in quick succession, the code above tries to save two new rows in user_car_metrics with the same user_id and car_id, both FKs on the user and car tables.
I have a constraint:
ALTER TABLE user_car_metrics DROP CONSTRAINT IF EXISTS user_id_car_id_unique, ADD CONSTRAINT user_id_car_id_unique UNIQUE (car_id, user_id);
Point is, there can only be one entry for a given user_id and car_id pair.
Because of that, I started seeing validation issues, and after looking into it and adding logs I realized the code above adds duplicates to the table (without the constraint). If the constraint is there, I get validation errors when the code above tries to insert the duplicate record.
Question is, how do I avoid this problem? How do I structure the code so that it won't try to create duplicate records? Is there a way to serialize this?
If you have a unique constraint, then you can use upsert to either insert or update the record, depending on whether a record already exists with the same primary key value, or with the same values in the columns covered by the unique constraint.
await models.user_car_metrics.upsert({
    user_id: userId,
    car_id: carId,
    latitude: latitude
    // ...
})
See upsert
PostgreSQL - Implemented with ON CONFLICT DO UPDATE. If update data contains PK field, then PK is selected as the default conflict key. Otherwise, first unique constraint/index will be selected, which can satisfy conflict key requirements.
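Since the insert data here carries no primary key, Sequelize will fall back to the first matching unique constraint, so the model needs to know about it. A minimal sketch of what that declaration might look like (types and the model variable name are assumptions; sequelize is your existing instance):

const { DataTypes } = require("sequelize");

// Sketch: the unique index mirrors the user_id_car_id_unique constraint,
// so ON CONFLICT DO UPDATE can use it as the conflict key.
const UserCarMetrics = sequelize.define("user_car_metrics", {
    user_id: { type: DataTypes.INTEGER, allowNull: false },
    car_id: { type: DataTypes.INTEGER, allowNull: false },
    latitude: { type: DataTypes.FLOAT }
}, {
    indexes: [
        { unique: true, fields: ["car_id", "user_id"] }
    ]
});

// Concurrent requests now race safely: each call either inserts the row
// or updates the existing (user_id, car_id) row.
await UserCarMetrics.upsert({ user_id: userId, car_id: carId, latitude });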
I am trying to delete and update records in Cosmos DB using my GraphQL/Node.js code and getting the error "Entity with the specified id does not exist in the system". Here is my code:
deleteRecord: async (root, id) => {
    const { resource: result } = await container.item(id.id, key).delete();
    console.log(`Deleted item with id: ${id}`);
},
Somehow the code below is not able to find the record; even container.item(id.id, key).read() doesn't work.
await container.item(id.id, key)
But if I try to find the record using a query spec, it works:
await container.items.query('SELECT * from c where c.id = "'+id+'"' ).fetchNext()
FYI - I am able to fetch all records and create new items, so connecting to the DB and reading/writing is not an issue.
What else could it be? Any pointers would be helpful.
Thanks in advance.
It seems you are passing the wrong key to item(id, key). According to the Note in this documentation:
In both the "update" and "delete" methods, the item has to be selected
from the database by calling container.item(). The two parameters
passed in are the id of the item and the item's partition key. In this
case, the parition key is the value of the "category" field.
So you need to pass the value of your partition key, not your partition key path.
For example, if you have a document like the one below and your partition key is '/category', you need to use await container.item("xxxxxx", "movie").
{
    "id": "xxxxxx",
    "category": "movie"
}
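Applied to the resolver above, a sketch could look like this (assuming the partition key path is /category and that the resolver also receives the document's category value; the argument names are illustrative):

// Sketch: pass the partition key VALUE ("movie"), not the path ("/category").
deleteRecord: async (root, args) => {
    const { id, category } = args;               // assumed resolver args
    await container.item(id, category).delete(); // id + partition key value
    console.log(`Deleted item with id: ${id}`);
},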
I am trying to query DynamoDB using the following code:
const AWS = require('aws-sdk');

let dynamo = new AWS.DynamoDB.DocumentClient({
    service: new AWS.DynamoDB({
        apiVersion: "2012-08-10",
        region: "us-east-1"
    }),
    convertEmptyValues: true
});
dynamo.query({
    TableName: "Jobs",
    KeyConditionExpression: 'sstatus = :st',
    ExpressionAttributeValues: {
        ':st': 'processing'
    }
}, (err, resp) => {
    console.log(err, resp);
});
When I run this, I get an error saying:
ValidationException: Query condition missed key schema element: id
I do not understand this. I have defined id as the partition key for the Jobs table, and I need to find all the jobs that are in processing status.
You're trying to run a query using a condition that does not include the partition key; that is how queries work in DynamoDB. You could do a scan for the info in your case; however, I don't think that is the best option.
I think you want to set up a global secondary index and use that to query for the processing status.
In another answer, @smcstewart responded to this question, but provided a link instead of explaining why this error occurs. I want to add a brief comment, hoping it will save you time.
The AWS docs on Querying a Table state that you can run WHERE-condition queries (e.g. the SQL query SELECT * FROM Music WHERE Artist='No One You Know') the DynamoDB way, but with one important caveat:
You MUST specify an EQUALITY condition for the PARTITION key, and you can optionally provide another condition for the SORT key.
Meaning you can only use key attributes with Query. Doing it any other way would mean that DynamoDB runs a full scan for you, which is NOT efficient, and less efficient than using global secondary indexes.
So if you need to query on non-key attributes, using Query is usually NOT an option; the best option is using Global Secondary Indexes, as suggested by @smcstewart.
I found this guide useful for creating a Global Secondary Index manually.
If you need to add it using CloudFormation, here is a relevant page.
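For a rough idea, adding such an index to the existing table with the AWS SDK could look like the sketch below (the index name and projection are assumptions, and a provisioned-capacity table would also need a ProvisionedThroughput block inside Create):

// Sketch: creates a GSI keyed on the non-key attribute "sstatus" so it
// can be queried with an equality condition instead of scanned.
const AWS = require('aws-sdk');
const dynamodb = new AWS.DynamoDB({ region: 'us-east-1' });

dynamodb.updateTable({
    TableName: 'Jobs',
    AttributeDefinitions: [
        { AttributeName: 'sstatus', AttributeType: 'S' }
    ],
    GlobalSecondaryIndexUpdates: [{
        Create: {
            IndexName: 'sstatus-index', // assumed name
            KeySchema: [{ AttributeName: 'sstatus', KeyType: 'HASH' }],
            Projection: { ProjectionType: 'ALL' }
        }
    }]
}, (err, data) => console.log(err, data));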
I was getting this error in a different scenario. Here it is (it's very unlikely that anyone else ends up with this case, but just in case):
I had a query working on a table (say Table A). Table A had partition key m_id and sort key u_id.
I had a query to fetch data using m_id. The query was working:
var queryParams = {
    ExpressionAttributeValues: {
        ':m_id': mId
    },
    KeyConditionExpression: 'm_id = :m_id',
    TableName: "A"
};
let connections = await docClient.query(queryParams).promise();
I created another table, say Table B. I made some mistakes naming its keys, so I simply deleted it and created a table with the same name again. Table B had partition key m_id and sort key s_id.
I copy-pasted the same query I had been using for Table A, changing only the table name, since the partition key had the same name.
To my shock, I got this exception:
"ValidationException: Query condition missed key schema element"
I rechecked all the names and compared the query with the working query. Everything was fine.
I thought that maybe, because I kept deleting and recreating Table B, it had something to do with that. So I created a fresh table with a new name, Table B2, with the same key names as Table B.
In the query that was throwing the exception, I changed only the table name from B to B2.
And the exception was gone.
If you are getting this on a fresh table, where no query has worked earlier, creating a new table with a new name is an option.
If you delete a table only to change partition key names, it may be safer to use a new name for the table as well (DynamoDB could be referring to metadata by table name and not by internal identifiers; it is possible that old metadata stays around even after you delete a table. Just a guess, given I faced this case).
EDIT: 2022-July-12
This error does not leave me alone. My own answer was helpful, but here is one more case: there was a trailing space in the name of a key in the table, and DynamoDB does not even check for spaces in key names.
You have to create a global secondary index for the status field.
Then your code could look something like this:
dynamo.query({
    TableName: "Jobs",
    IndexName: 'status',
    KeyConditionExpression: '#s = :st',
    ExpressionAttributeValues: {
        ':st': 'processing'
    },
    ExpressionAttributeNames: {
        '#s': 'status'
    }
}, (err, resp) => {
    console.log(err, resp);
});
Note: the scan operation is indeed very costly, especially if your table is huge.
I solved the problem using AWS.DynamoDB.DocumentClient() with scan. For example (Node.js):
var docClient = new AWS.DynamoDB.DocumentClient();

var params = {
    TableName: "product",
    FilterExpression: "#cg = :data",
    ExpressionAttributeNames: {
        "#cg": "categoria"
    },
    ExpressionAttributeValues: {
        ":data": category
    }
};

docClient.scan(params, onScan);

function onScan(err, data) {
    if (err) {
        // log the failure on the server
        console.error("Unable to scan the table. Error JSON:", JSON.stringify(err, null, 2));
        res.json(err);
    } else {
        console.log("Scan succeeded.");
        res.json(data);
    }
}
I am trying to conditionally write an item to DynamoDB, with the intention of "marking" a specific event as "already happened" by checking whether the item exists and writing it if not.
So basically I want to check for the existence of a combination of attributes and, if there is no item with that attribute combination, write it.
I have a DynamoDB instance, with a Lambda function set up to write an event to it. The table name is events and the primary key is mapId.
I have the following code:
...
var params = {
    Item: {
        event: query.event,
        mapId: query.mapId
    },
    TableName: TABLE_NAME
};

params.Item.uniqueKey = query.uniqueKey;
params.ExpressionAttributeValues = {
    ":mapId": query.mapId,
    ":uniqueKey": query.uniqueKey
};
params.ConditionExpression = "mapId <> :mapId and uniqueKey <> :uniqueKey";

docClient.put(params, function(err, data) {
...
I expect this code to succeed the first time but fail the second time, because there is already an item with that combination of attributes. However, any call to this put request adds another item with the same attributes.
I'm sure there's a way to do this properly, I just haven't figured out how. Help???
How can I add a new row with the update operation?
I am using the following code:
statuscollection.update({
    id: record.id
}, {
    id: record.id,
    ip: value
}, {
    upsert: true
}, function (err, result) {
    console.log(err);
    if (!err) {
        return context.sendJson([], 404);
    }
});
The first time I call this, it adds the row id: record.id. Then I call it with id: value, then I have to add id: ggh, and so on.
How can I add a new row on each call of this function, one for each document I need to insert?
Judging by the structure of your code, you are probably missing a few concepts.
You are using update in a case where you probably do not need to.
You seem to be providing an id field, whereas the primary key in MongoDB would be _id, if that is what you mean.
If you intend to add a new document on every call, then you should probably use insert. Your use of update with upsert is intended for matching a document with the query criteria: if the document exists, update the fields as specified; if not, insert a new document with those fields.
Unless that actually is your goal, insert is most certainly what you need. In that case you will likely rely on the value of _id being populated automatically, or supply your own unique value. Unless you specifically want another field as an identifier that is not unique, you will likely want to use the _id field as described before.
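Minimal sketches of both options, keeping the question's field names and callback style (and assuming a driver version that still accepts callbacks):

// Option 1 (sketch): insert a brand-new document on every call; MongoDB
// fills in a unique _id automatically.
statuscollection.insertOne({ id: record.id, ip: value }, function (err, result) {
    console.log(err);
    if (!err) {
        return context.sendJson([], 404);
    }
});

// Option 2 (sketch): keep exactly one document per id and only update its
// ip. $set avoids replacing the whole document, which the original update
// call would do.
statuscollection.updateOne(
    { id: record.id },
    { $set: { ip: value } },
    { upsert: true },
    function (err, result) {
        console.log(err);
    }
);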