ValidationException: The table does not have the specified index: GameTitleIndex - node.js

I am using Vogels, a DynamoDB data mapper for Node.js, and I am querying a global secondary index according to the Vogels documentation. All I have done is create a model with a global secondary index like this:
let MyModel = vogels.define('MyModel', {
  hashKey: 'uuid',
  timestamps: true,
  updatedAt: 'updated_at',
  createdAt: 'created_at',
  schema: MyModelBaseSchema,
  indexes: [{
    hashKey: 'gameTitle', rangeKey: 'topScore', name: 'GameTitleIndex', type: 'global'
  }]
});
and querying against this index:
MyModel.query('game 1')
  .usingIndex('GameTitleIndex')
  .loadAll()
  .select("COUNT");
When I run any tests, it throws the exception ValidationException: The table does not have the specified index: GameTitleIndex.
According to the documentation, this is all I have to do to query the index. Is there anything I have missed?
Any answers will be appreciated.
Thanks in advance.

– In case you are using the Serverless framework with the serverless-dynamodb-local and serverless-offline plugins (for local testing),
– and in case you have created a new Local Secondary Index, but the command shown below still does not list it under the LocalSecondaryIndexes configuration node:
aws dynamodb describe-table --table-name YOUR_TABLE_NAME --endpoint-url http://localhost:8000
– and in case you are receiving the error ValidationException: The table does not have the specified index: YOUR_INDEX_NAME when using code similar to that shown below:
query(uid, id) {
  const params = {
    TableName: YOUR_TABLE_NAME,
    IndexName: YOUR_LSI_NAME,
    KeyConditionExpression: "#uid = :uid and #id = :id",
    ExpressionAttributeNames: {
      "#uid": "uid",
      "#id": "id",
    },
    ExpressionAttributeValues: { ":uid": uid, ":id": id },
  };
  return DB.query(params).promise();
}
Then most likely you need to delete the local DynamoDB database file and restart the services with the serverless offline start command. This will force recreation of the local DynamoDB database file with the proper Local Secondary Indexes.
To delete the local DynamoDB database file, do the following:
Update serverless.yml to specify the path to the DynamoDB database, as shown below:
custom:
  dynamodb:
    stages:
      - ${self:provider.stage}
    start:
      port: 8000
      inMemory: true
      migrate: true
      dbPath: "./.db" # Make sure that folder exists!
Now you can go to the ./.db folder and remove the file shared-local-instance.db (which is a SQLite database that mimics DynamoDB :0).
As a result, on the next start with the serverless offline start command, an up-to-date local DynamoDB database file will be created.
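In short, the reset cycle looks like this (assuming the dbPath configured above):

rm ./.db/shared-local-instance.db
serverless offline start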

In case someone is stuck on the same issue, here is the answer:
After creating a new index in the model, whether a local secondary index or a global secondary index, migrations have to be run. Only then will the table have the specified index. Refer to this issue for more clarification.
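With Vogels, that migration step is typically a call to vogels.createTables before querying. A minimal sketch, assuming the MyModel definition from the question (the capacity values are illustrative):

// createTables creates any tables (and their indexes) that do not exist yet,
// so the new GSI is picked up when the table is (re)created.
vogels.createTables({
  'MyModel': { readCapacity: 1, writeCapacity: 1 }
}, function(err) {
  if (err) console.log('Error creating tables: ', err);
  else console.log('Tables have been created');
});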

Related

One to many relation in Dynamodb Node js (Dynamoose)

I am using DynamoDB with Node.js for my reservation system, and Dynamoose as the ORM. I have two tables, Table and Reservation. To create a relation between them, I added a tableId attribute in Reservation which is of Model type (of type Table), as mentioned in the dynamoose docs. Using document.populate I am able to get the Table data through the tableId attribute from the Reservation table. But how can I retrieve all Reservations for a Table? (Reservation and Table have a one-to-many relation.)
These are my models:
Table Model:
const tableSchema = new Schema({
  tableId: {
    type: String,
    required: true,
    unique: true,
    hashKey: true
  },
  name: {
    type: String,
    default: null
  },
});
Reservation Model:
const reservationSchema = new Schema({
  id: {
    type: Number,
    required: true,
    unique: true,
    hashKey: true
  },
  tableId: table, // as per the docs, an attribute of Table (Model) type
  date: {
    type: String
  }
});
This is how I retrieve the table data from the reservation model:
reservationModel.scan().exec()
  .then(posts => {
    return posts.populate({
      path: 'tableId',
      model: 'Space'
    });
  })
  .then(populatedPosts => {
    console.log('pp', populatedPosts);
    return {
      allData: {
        message: "Executed successfully",
        data: populatedPosts
      }
    };
  })
Can anyone please help me retrieve all Reservation data for a Table?
As of v2.8.2, Dynamoose does not support this. Dynamoose is focused on simple, one-directional relationships. This is partly due to the fact that we discourage use of model.populate. It is important to note that model.populate makes another, completely separate request to DynamoDB. This increases the latency and decreases the performance of your application.
DynamoDB truly requires a shift in how you think about modeling your data compared to SQL. I recommend watching AWS re:Invent 2019: Data modeling with Amazon DynamoDB (CMY304) for a great explanation of how you can model your data in DynamoDB in a highly efficient manner.
At some point Dynamoose might add support for this, but it's really hard to say if we will.
If you truly want to do this, I'd recommend adding a global index to your tableId property in your reservation schema. Then you can run something like the following:
async function code(id) {
  const reservation = await reservationModel.get(id);
  // This will be an array of reservation entries where "tableId" = reservation.tableId.
  // Remember, it is required that you add the index for this to work.
  const reservations = await reservationModel.query("tableId").eq(reservation.tableId).exec();
}
Remember, this will cause multiple calls to DynamoDB and isn't as efficient. I'd highly recommend watching the video I linked above to get more information about how to model your data in a more efficient manner.
Finally, I'd like to point out that your unique: true code does nothing. As seen in the Dynamoose Attribute Settings Documentation, unique is not a valid setting. In your case since you don't have a rangeKey, it's not possible for two items to have the same hashKey, so technically it's already a unique property based on that. However it is important to note that you can overwrite existing items when creating an item. You can set overwrite to false for document.save or Model.create to prevent that behavior and throw an error instead of overwriting your document.
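A minimal sketch of that last point, assuming the reservation model from the question (the item shape is illustrative):

// With overwrite set to false, create rejects with a conditional check error
// instead of silently replacing an existing item with the same hashKey.
await reservationModel.create({ id: 1, date: "2021-01-01" }, { overwrite: false });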

Mongoose with CosmosDB: Getting error `Shared throughput collection should have a partition key`

I have a node-express application that currently uses Mongoose to connect to MongoDB, and am attempting to migrate it to Azure Cosmos DB.
When I simply allow Mongoose to create the database, the application works fine; however, the database is created with individual collection RU pricing.
If I create a new database with Shared throughput enabled and attempt to use it, I get the error Shared throughput collection should have a partition key.
I have tried updating the collection schema to include a shard key like this:
const mongoose = require('mongoose');

module.exports = function() {
  const types = mongoose.Schema.Types;
  const messages = new mongoose.Schema({
    order: { type: types.ObjectId, required: true, ref: 'orders' },
    createdAt: { type: Date, default: Date.now },
    sender: { type: types.ObjectId, required: true, ref: 'users' },
    recipient: { type: types.ObjectId, ref: 'users' },
    text: { type: String, required: true },
    seen: { type: Boolean, default: false },
  }, { shardKey: { order: 1 } });
  return mongoose.model('messages', messages);
};
However, this does not work.
Any ideas on how to create/use a partition key? Alternatively, the database is small, so if it's possible to remove the requirement for the partition key, that would also be fine.
Now I don't have an exact answer for this question, so no need to accept this unless you feel it's correct.
The best solution I've found so far is that this is due to "Provision Throughput" being checked when the database is created in the Azure console. If you delete and recreate the database with this box unchecked (it's right below the input for the database name), you should no longer encounter this error.
You specify it when creating a collection in a DB for which you've opted in to Shared Throughput.
Collection vs Database
If you're using individual collection pricing, you can set the throughput on the individual collections. If you're using the lesser pricing option, you get shared throughput (at the database level), which is less granular but less expensive.
Details here: https://azure.microsoft.com/en-us/blog/sharing-provisioned-throughput-across-multiple-containers-in-azure-cosmosdb/
Partition keys
If you're using shared throughput, you'll need a partition key for the collection that you're adding.
So: create a DB with shared throughput (check the corresponding checkbox).
After that, when you're attempting to add a new document, you should be able to create a partition key.
I have yet another not-quite-complete answer for you. It seems like, yes, it is required to use partitioned collections if you are using the shared/db-level throughput model in Cosmos. But it turns out it is possible to create a CosmosDb collection with a partition key using only the MongoDb wire protocol (meaning no dependency on an Azure SDK, and no need to pre-create every collection via the Azure Portal).
The only remaining catch is that I don't think it's possible to run this command via Mongoose; it will probably have to be run directly via the MongoDB Node.js driver, but at least it can still be run from code.
From a MongoDB shell:
db.runCommand({
  shardCollection: "myDbName.nameOfCollectionToCreate",
  key: { nameOfDesiredPartitionKey: "hashed" }
})
This command is meant to set the sharding key for a collection and start sharding the collection, but in CosmosDb it works to create the collection with the desired partitionKey already set.
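For reference, here is a minimal sketch of running the same command from code with the MongoDB Node.js driver; the database, collection, and key names are placeholders, as in the shell version:

const { MongoClient } = require('mongodb');

async function createPartitionedCollection(connectionString) {
  const client = await MongoClient.connect(connectionString);
  try {
    // The same command as in the shell: Cosmos DB creates the collection
    // with the desired partition key already set.
    await client.db('myDbName').admin().command({
      shardCollection: 'myDbName.nameOfCollectionToCreate',
      key: { nameOfDesiredPartitionKey: 'hashed' }
    });
  } finally {
    await client.close();
  }
}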
I have an even more complete answer. You actually can do it with Mongoose. I usually do it like this in an Azure Function:
mongoose.connect(process.env.COSMOSDB_CONNSTR, {
  useUnifiedTopology: true,
  useNewUrlParser: true,
  auth: {
    user: process.env.COSMOSDB_USER,
    password: process.env.COSMOSDB_PASSWORD,
  },
})
  .then(() => {
    mongoose.connection.db.admin().command({
      shardCollection: "mydb.mycollection",
      key: { _id: "hashed" }
    });
    console.log('Connection to CosmosDB successful 🚀');
  })
  .catch((err) => console.error(err));

Query condition missed key schema element : Validation Error

I am trying to query DynamoDB using the following code:
const AWS = require('aws-sdk');

let dynamo = new AWS.DynamoDB.DocumentClient({
  service: new AWS.DynamoDB({
    apiVersion: "2012-08-10",
    region: "us-east-1"
  }),
  convertEmptyValues: true
});

dynamo.query({
  TableName: "Jobs",
  KeyConditionExpression: 'sstatus = :st',
  ExpressionAttributeValues: {
    ':st': 'processing'
  }
}, (err, resp) => {
  console.log(err, resp);
});
When I run this, I get an error saying:
ValidationException: Query condition missed key schema element: id
I do not understand this. I have defined id as the partition key for the jobs table and need to find all the jobs that are in processing status.
You're trying to run a query using a condition that does not include the primary key. This is how queries work in DynamoDB. You would need to do a scan for the info in your case; however, I don't think that is the best option.
I think you want to set up a global secondary index and use that to query for the processing status.
In another answer, @smcstewart responded to this question, but he provided a link instead of explaining why this error occurs. I want to add a brief comment, hoping it will save you time.
The AWS docs on Querying a Table state that you can do WHERE-condition queries (e.g. the SQL query SELECT * FROM Music WHERE Artist='No One You Know') in the DynamoDB way, but with one important caveat:
You MUST specify an EQUALITY condition for the PARTITION key, and you can optionally provide another condition for the SORT key.
Meaning you can only use key attributes with Query. Doing it any other way would require DynamoDB to run a full scan for you, which is NOT efficient (less efficient than using Global Secondary Indexes).
So if you need to query on non-key attributes, using Query is usually NOT an option; the best option is using Global Secondary Indexes, as suggested by @smcstewart.
I found this guide useful for creating a Global Secondary Index manually.
If you need to add it using CloudFormation, here is a relevant page.
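Alternatively, a GSI can be added to an existing table from code. Here is a minimal sketch using the AWS SDK's updateTable call, assuming the Jobs table from the question; the index name and capacity values are illustrative:

// Adds a global secondary index on "sstatus" to the existing Jobs table.
// Index creation runs asynchronously; the table stays available meanwhile.
const AWS = require('aws-sdk');
const dynamodb = new AWS.DynamoDB({ region: 'us-east-1' });

dynamodb.updateTable({
  TableName: 'Jobs',
  AttributeDefinitions: [
    { AttributeName: 'sstatus', AttributeType: 'S' }
  ],
  GlobalSecondaryIndexUpdates: [{
    Create: {
      IndexName: 'sstatus-index',
      KeySchema: [{ AttributeName: 'sstatus', KeyType: 'HASH' }],
      Projection: { ProjectionType: 'ALL' },
      ProvisionedThroughput: { ReadCapacityUnits: 1, WriteCapacityUnits: 1 }
    }
  }]
}, (err, data) => {
  if (err) console.error(err);
  else console.log('GSI creation started:', data.TableDescription.TableStatus);
});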
I was getting this error in a different scenario. Here it is.
(It's very unlikely that anyone else ends up with this case, but just in case.)
I had a query working on a table (say Table A). Table A had a partition key m_id and a sort key u_id.
I had a query to fetch data using m_id. The query was working.
var queryParams = {
  ExpressionAttributeValues: {
    ':m_id': mId
  },
  KeyConditionExpression: 'm_id = :m_id',
  TableName: "A"
};

let connections = await docClient.query(queryParams).promise();
I created another table, say Table B. I made some errors in naming the keys, so I simply deleted it and created a table with the same name again. Table B had partition key m_id and sort key s_id.
I copy-pasted the same query I had been using for Table A, changing only the table name, because the partition key had the same name.
To my shock, I got this exception:
"ValidationException: Query condition missed key schema element"
I rechecked all the names and compared the query with the working query. Everything was fine.
I thought that maybe because I had deleted and recreated Table B, it could be something related to that. So I created a fresh table with a new name, Table B2, with the same key names as Table B.
In the query that was throwing the exception, I changed only the table name from B to B2.
And the exception was gone.
If you are getting this on a fresh table where no query has worked earlier, creating a new table with a new name is an option.
If you delete a table only to change the partition key names, it may be safer to use a new name for the table as well (DynamoDB could be referring to metadata by table name rather than by internal identifiers; it is possible that old metadata stays around even after you delete a table. Just a guess, given that I faced this case).
EDIT 2022-July-12:
This error does not leave me alone. My own answer was helpful, but here is one more case: there was a trailing space in the name of a key in the table, and DynamoDB does not even warn about spaces in key names.
You have to create a global secondary index for the status field.
Then your code could look something like this:
dynamo.query({
  TableName: "Jobs",
  IndexName: 'status',
  KeyConditionExpression: '#s = :st',
  ExpressionAttributeValues: {
    ':st': 'processing'
  },
  ExpressionAttributeNames: {
    '#s': 'status',
  },
}, (err, resp) => {
  console.log(err, resp);
});
Note: the scan operation is indeed very costly, especially if your table is huge in size.
I solved the problem using AWS.DynamoDB.DocumentClient() with scan. For example (Node.js):
var docClient = new AWS.DynamoDB.DocumentClient();

var params = {
  TableName: "product",
  FilterExpression: "#cg = :data",
  ExpressionAttributeNames: {
    "#cg": "categoria",
  },
  ExpressionAttributeValues: {
    ":data": category,
  }
};

docClient.scan(params, onScan);

function onScan(err, data) {
  if (err) {
    // log the error on the server
    console.error("Unable to scan the table. Error JSON:", JSON.stringify(err, null, 2));
    res.json(err);
  } else {
    console.log("Scan succeeded.");
    res.json(data);
  }
}

How to create a UNIQUE constraint on a JSONB field with Sequelize

I'm using the Sequelize ORM in Node.js to manage a PostgreSQL database.
I'm using the JSONB datatype in my table; I need an index on the JSONB field and a unique constraint on a property of this JSON.
If I had to do it in classic SQL, here is my script:
CREATE TABLE tableJson (
  id SERIAL PRIMARY KEY,
  content JSONB NOT NULL
);

CREATE INDEX j_idx ON tableJson USING gin(content jsonb_path_ops);

CREATE UNIQUE INDEX content_name_idx ON tableJson(((content->>'name')::varchar));
I've found how to create the table with the INDEX, but not how to deal with the UNIQUE constraint. Here is a sample of my script:
var tableJson = sequelize.define('tableJson', {
  content: Sequelize.JSONB
}, {
  indexes: [{
    fields: ['content'],
    using: 'gin',
    operator: 'jsonb_path_ops'
  }]
});
Is there a solution to my problem? If not, I'll probably use the sequelize.query method to execute a raw query, but that is not very maintainable.
Any help would be appreciated!
This is the workaround I use to add indexes on JSONB fields with Postgres and Sequelize.
First, set the quoteIdentifiers option to false in your Sequelize client.
const seq = new Sequelize(db, user, pass, {
  host: host,
  dialect: 'postgres',
  quoteIdentifiers: false,
});
(Watch out though: this will make all your table names case-insensitive when created by Sequelize.sync().)
You can now add indexes on JSONB fields this way in your model definition :
indexes: [
  {
    fields: ['content'],
    using: 'gin',
    operator: 'jsonb_path_ops'
  },
  {
    fields: ['((content->>\'name\')::varchar)'],
    unique: true,
    name: 'content_name_idx',
  }
],
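If you would rather not disable quoteIdentifiers globally, a fallback along the lines the question mentions is to run the raw statement once at startup. A minimal sketch, assuming Sequelize's default pluralized table name tableJsons:

// IF NOT EXISTS makes this safe to run on every startup (PostgreSQL 9.5+).
await sequelize.query(
  "CREATE UNIQUE INDEX IF NOT EXISTS content_name_idx " +
  "ON tableJsons (((content->>'name')::varchar))"
);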

sequelize for Node.js : ER_NO_SUCH_TABLE

I'm new to Sequelize and Node.js.
I wrote some code to test Sequelize, but I get the error "ER_NO_SUCH_TABLE: Table 'db.node_tests' doesn't exist".
The error is very simple. However, I want to get data from the "node_test" table.
I think Sequelize appends an 's' character.
Here is my source code:
var Sequelize = require('sequelize');
var sequelize = new Sequelize('db', 'user', 'pass');

var nodeTest = sequelize.define('node_test', {
  uid: Sequelize.INTEGER,
  val: Sequelize.STRING
});

nodeTest.find({ where: { uid: '1' } })
  .success(function(tbl) {
    console.log(tbl);
  });
I already created the table "node_test" and inserted data using the mysql client.
Did I misunderstand the usage?
I found the answer to my own question.
I appended the following option to the Sequelize constructor: {define: {freezeTableName: true}}.
Then Sequelize does not append the 's' character to the table name.
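For clarity, this is presumably where that option goes (the connection credentials are placeholders):

var sequelize = new Sequelize('db', 'user', 'pass', {
  define: { freezeTableName: true } // use model names as-is; do not pluralize
});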
Though the answer works nicely, I nowadays recommend the use of the tableName option when declaring the model:
sequelize.define('node_test', {
  uid: Sequelize.INTEGER,
  val: Sequelize.STRING
}, {
  tableName: 'node_test'
});
http://docs.sequelizejs.com/manual/tutorial/models-definition.html
Sequelize uses by default the plural of the passed model name, so it will look for the table "node_tests" or "NodeTests". It can also create the table for you if you want.
nodeTest.sync().success(function() {
  // here comes your find command.
});
Sync will try to create the table if it does not already exist. You can also drop the existing table and create a new one from scratch by using sync({ force: true }). Check the SQL commands on your command line for more details about what is going on.
When you define a model for an existing table, you need to set two options for Sequelize to:
find your table name as-is, and
not fret about Sequelize's default columns updatedAt and createdAt that it expects.
Simply add both options like so:
var nodeTest = sequelize.define('node_test',
  { uid: Sequelize.INTEGER, val: Sequelize.STRING },
  { freezeTableName: true, timestamps: false } // add both options here
);
Note the options parameter:
sequelize.define('name_of_your_table',
  { attributes_of_your_table_columns },
  { options }
);
Missing either option triggers the respective error when using Sequelize methods such as nodeTest.findAll():
> ER_NO_SUCH_TABLE //freezeTableName
> ER_BAD_FIELD_ERROR //timestamps
Alternatively, you can:
create a fresh table through Sequelize (it will append "s" to the table name and create two timestamp columns as defaults), or
use sequelize-auto, an awesome npm package to generate Sequelize models from your existing database programmatically.
Here's the Sequelize documentation for option configurations.
In my case, it was due to casing. I had:
sequelize.define('User', {
The correct way is to use lowercase:
sequelize.define('user', {
