I have the following schema where I am trying to create a table with id as the primary key, and both code and secondCode as global secondary indexes so I can query the table on either of them.
resource "aws_dynamodb_table" "myDb" {
name = "myTable"
billing_mode = "PAY_PER_REQUEST"
hash_key = "id"
attribute {
name = "id"
type = "S"
}
attribute {
name = "code"
type = "S"
}
attribute {
name = "secondCode"
type = "S"
}
global_secondary_index {
name = "code-index"
hash_key = "code"
projection_type = "ALL"
}
global_secondary_index {
name = "second_code-index"
hash_key = "secondCode"
projection_type = "ALL"
}
}
When I try to look for one item by code
const toGet = Object.assign(new Item(), {
code: 'code_456',
});
item = await dataMapper.get<Item>(toGet);
locally I get
ValidationException: The number of conditions on the keys is invalid
and on the deployed instance of the DB I get
The provided key element does not match the schema
I can see from the logs that the key is not being populated
Serverless: [AWS dynamodb 400 0.082s 0 retries] getItem({ TableName: 'myTable', Key: {} })
Here is the class configuration for Item
@table(getEnv('MY_TABLE'))
export class Item {
  @hashKey({ type: 'String' })
  id: string;

  @attribute({
    indexKeyConfigurations: { 'code-index': 'HASH' },
    type: 'String',
  })
  code: string;

  @attribute({
    indexKeyConfigurations: { 'second_code-index': 'HASH' },
    type: 'String',
  })
  secondCode: string;

  @attribute({ memberType: embed(NestedItem) })
  nestedItems?: Array<NestedItem>;
}

class NestedItem {
  @attribute()
  name: string;

  @attribute()
  price: number;
}
I am using https://github.com/awslabs/dynamodb-data-mapper-js
I looked at the repo you linked for the package; I think you need to use the .query(...) method with the indexName parameter to tell DynamoDB you want to use that secondary index. Usually in DynamoDB, get operations work against the table's own keys (in your case, you'd use get for lookups by id, and query for lookups through the indexes).
The docs don't make this very clear, but if you look at the GetItem API reference you'll see there is nowhere to supply an index name, whereas the Query operation lets you supply one. As for why you need to query this way, you can read this: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.CoreComponents.html
The issue you are facing is that you are calling GetItem against an index, which is not possible. GetItem must target exactly one item, and unlike the base table, an index can contain multiple items with the same key; for this reason only the multi-item APIs, Query and Scan, can be used against an index.
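As a rough sketch of what that looks like with this library (assuming your existing dataMapper instance and the code-index name from your Terraform; the option names are taken from the library's README, so double-check them against your version):
// Query the GSI instead of calling get(); query() returns an async iterator.
async function findByCode(code: string): Promise<Item[]> {
  const matches: Item[] = [];
  const iterator = dataMapper.query(
    Item,
    { code },                      // key condition on the index's hash key
    { indexName: 'code-index' }    // tell the mapper to target the GSI
  );
  for await (const item of iterator) {
    matches.push(item);
  }
  return matches;
}
// usage: const items = await findByCode('code_456');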
Related
I'm having trouble with Node & Knex.js.
I'm trying to build a mini blog with posts, and I'm adding functionality to attach multiple tags to a post.
I have a Post model with the following properties:
id SERIAL PRIMARY KEY NOT NULL,
name TEXT,
Second, I have a Tags model that is used for storing tags:
id SERIAL PRIMARY KEY NOT NULL,
name TEXT
And I have a many-to-many table, post_tags, that references posts & tags:
id SERIAL PRIMARY KEY NOT NULL,
post_id INTEGER NOT NULL REFERENCES posts ON DELETE CASCADE,
tag_id INTEGER NOT NULL REFERENCES tags ON DELETE CASCADE
I have managed to insert tags and create a post with tags, but when I want to fetch post data with the tags attached to that post, I'm having trouble.
Here is the problem:
const data = await knex.select('posts.name as postName', 'tags.name as tagName')
  .from('posts')
  .leftJoin('post_tags', 'posts.id', 'post_tags.post_id')
  .leftJoin('tags', 'tags.id', 'post_tags.tag_id')
  .where('posts.id', id);
The query above returns this result:
[
{
postName: 'Post 1',
tagName: 'Youtube',
},
{
postName: 'Post 1',
tagName: 'Funny',
}
]
But I want the result to be formatted & returned like this:
{
postName: 'Post 1',
tagName: ['Youtube', 'Funny'],
}
Is that even possible with a query, or do I have to format the data manually?
One way of doing this is to use some kind of aggregate function. If you're using PostgreSQL:
const data = await knex.select('posts.name as postName', knex.raw('ARRAY_AGG (tags.name) tags'))
  .from('posts')
  .innerJoin('post_tags', 'posts.id', 'post_tags.post_id')
  .innerJoin('tags', 'tags.id', 'post_tags.tag_id')
  .where('posts.id', id)
  .groupBy("postName")
  .orderBy("postName")
  .first();
->
{ postName: 'post1', tags: [ 'tag1', 'tag2', 'tag3' ] }
For MySQL:
const data = await knex.select('posts.name as postName', knex.raw('GROUP_CONCAT (tags.name) as tags'))
  .from('posts')
  .innerJoin('post_tags', 'posts.id', 'post_tags.post_id')
  .innerJoin('tags', 'tags.id', 'post_tags.tag_id')
  .where('posts.id', id)
  .groupBy("postName")
  .orderBy("postName")
  .first()
  .then(res => Object.assign(res, { tags: res.tags.split(',') }));
There are no arrays in MySQL, and GROUP_CONCAT will just concatenate all tags into a single string, so we need to split them manually.
->
RowDataPacket { postName: 'post1', tags: [ 'tag1', 'tag2', 'tag3' ] }
The result is correct, as that is how SQL works: it returns rows of data. SQL has no concept of returning anything other than a table (think of CSV data or an Excel spreadsheet).
There are some interesting things you can do in SQL to concatenate the tags into a string, but that is not really what you want. Either way you will need to add a post-processing step.
With your current query you can simply do something like this:
function formatter(result) {
  let set = {};
  result.forEach(row => {
    if (set[row.postName] === undefined) {
      set[row.postName] = row;
      set[row.postName].tagName = [set[row.postName].tagName];
    } else {
      set[row.postName].tagName.push(row.tagName);
    }
  });
  return Object.values(set);
}
// ...
query.then(formatter);
This shouldn't be slow as you're only looping through the results once.
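With the rows from your example, the formatter collapses them into something like this (an array, since there could be several posts):
[
  {
    postName: 'Post 1',
    tagName: ['Youtube', 'Funny'],
  }
]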
I define a schema like this:
const query = new GraphQLObjectType({
  name: 'Query',
  fields: {
    quote: {
      type: queryType,
      args: {
        id: { type: QueryID }
      },
    },
  },
});
const schema = new GraphQLSchema({
  query,
});
QueryID is a customised scalar type.
const QueryID = new GraphQLScalarType({
  name: 'QueryID',
  description: 'query id field',
  serialize(dt) {
    // value sent to the client
    return dt;
  },
  parseLiteral(ast) {
    if (ast.kind === 'IntValue') {
      return Number(ast.value);
    }
    return null;
  },
  parseValue(v) {
    // value from the client
    return v;
  },
});
Client query:
query {
  quote(queryType: 1)
}
I found that the parseValue method is not called when clients send query to my server. I can see parseLiteral is called correctly.
In most of the documents I can find, they use gql to define the schema and they need to put scalar QueryID in their schema definition. But in my case, I am using a GraphQLSchema object for the schema. Is this the root cause? If yes, what is the best way to make it work? I don't want to switch to the gql format because I need to construct my schema at runtime.
serialize is only called when sending the scalar back to the client in the response. The value it receives as a parameter is the value returned in the resolver (or if the resolver returned a Promise, the value the Promise resolved to).
parseLiteral is only called when parsing a literal value in a query. Literal values include strings ("foo"), numbers (42), booleans (true) and null. The value the method receives as a parameter is the AST representation of this literal value.
parseValue is only called when parsing a variable value in a query. In this case, the method receives as a parameter the relevant JSON value from the variables object submitted along with the query.
So, assuming a schema like this:
type Query {
  someField(someArg: CustomScalar): String
  someOtherField: CustomScalar
}
serialize:
query {
  someOtherField
}
parseLiteral:
query {
  someField(someArg: "something")
}
parseValue:
query ($myVariable: CustomScalar) {
  someField(someArg: $myVariable)
}
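So in your case parseValue only runs if the client sends the value through a variable rather than inline. A minimal sketch, assuming the argument is really named id as in your args definition:
query ($id: QueryID) {
  quote(id: $id)
}
with a variables object sent alongside the request, e.g. { "id": 1 } — that 1 is the value parseValue receives.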
In an update to our GraphQL API, only the model's _id field is required, hence the ! in the SDL below. Other fields such as name don't have to be included on an update, but they also cannot have a null value.
A null value lets us know that a field needs to be removed from the database.
Below is an example of a model where this would cause a problem - the Name custom scalar doesn't allow null values but GraphQL still allows them through:
type language {
  _id: ObjectId
  iso: Language_ISO
  auto_translate: Boolean
  name: Name
  updated_at: Date_time
  created_at: Date_time
}

input language_create {
  iso: Language_ISO!
  auto_translate: Boolean
  name: Name!
}

input language_update {
  _id: ObjectId!
  iso: Language_ISO!
  auto_translate: Boolean
  name: Name
}
When a null value is passed in it bypasses our Scalars so we cannot throw a user input validation error if null isn't an allowed value.
I am aware that ! means non-nullable and that the lack of a ! means the field is nullable; however, it is frustrating that, as far as I can see, we cannot restrict the allowed values for a field that is optional. This issue only occurs on updates.
Are there any ways to work around this issue through custom Scalars without having to start hardcoding logic into each update resolver which seems cumbersome?
EXAMPLE MUTATION THAT SHOULD FAIL
mutation tests_language_create( $input: language_update! ) { language_update( input: $input ) { name }}
Variables
input: {
_id: "1234",
name: null
}
UPDATE 9/11/18: for reference, I can't find a way around this as there are issues with using custom scalars, custom directives and validation rules. I've opened an issue on GitHub here: https://github.com/apollographql/apollo-server/issues/1942
What you're effectively looking for is custom validation logic. You can add any validation rules you want on top of the "default" set that is normally included when you build a schema. Here's a rough example of how to add a rule that checks for null values on specific types or scalars when they are used as arguments:
const { specifiedRules } = require('graphql/validation')
const { GraphQLError } = require('graphql/error')

const typesToValidate = ['Foo', 'Bar']

// This returns a "Visitor" whose properties get called for
// each node in the document that matches the property's name
function CustomInputFieldsNonNull(context) {
  return {
    Argument(node) {
      const argDef = context.getArgument();
      const checkType = typesToValidate.includes(argDef.astNode.type.name.value)
      if (checkType && node.value.kind === 'NullValue') {
        context.reportError(
          new GraphQLError(
            `Type ${argDef.astNode.type.name.value} cannot be null`,
            node,
          ),
        )
      }
    },
  }
}
// We're going to override the validation rules, so we want to grab
// the existing set of rules and just add on to it
const validationRules = specifiedRules.concat(CustomInputFieldsNonNull)
const server = new ApolloServer({
  typeDefs,
  resolvers,
  validationRules,
})
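With that rule registered, a request that inlines a literal null for an argument whose type is listed in typesToValidate is rejected during validation, before any resolver runs. A sketch (someField/someArg are placeholders; assume someArg is declared with the Foo type):
query {
  someField(someArg: null)   # fails validation with "Type Foo cannot be null"
}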
EDIT: The above only works if you're not using variables, which isn't going to be very helpful in most cases. As a workaround, I was able to utilize a FIELD_DEFINITION directive to achieve the desired behavior. There's probably a number of ways you could approach this, but here's a basic example:
const { SchemaDirectiveVisitor } = require('graphql-tools')
const { defaultFieldResolver } = require('graphql')
const _ = require('lodash')

class NonNullInputDirective extends SchemaDirectiveVisitor {
  visitFieldDefinition(field) {
    const { resolve = defaultFieldResolver } = field
    const { args: { paths } } = this
    field.resolve = async function (...resolverArgs) {
      const fieldArgs = resolverArgs[1]
      for (const path of paths) {
        if (_.get(fieldArgs, path) === null) {
          throw new Error(`${path} cannot be null`)
        }
      }
      return resolve.apply(this, resolverArgs)
    }
  }
}
Then in your schema:
directive @nonNullInput(paths: [String!]!) on FIELD_DEFINITION

input FooInput {
  foo: String
  bar: String
}

type Query {
  foo (input: FooInput!): String @nonNullInput(paths: ["input.foo"])
}
Assuming that the "non null" input fields are the same each time the input is used in the schema, you could map each input's name to an array of field names that should be validated. So you could do something like this as well:
const nonNullFieldMap = {
  FooInput: ['foo'],
}

class NonNullInputDirective extends SchemaDirectiveVisitor {
  visitFieldDefinition(field) {
    const { resolve = defaultFieldResolver } = field
    const visitedTypeArgs = this.visitedType.args
    field.resolve = async function (...resolverArgs) {
      const fieldArgs = resolverArgs[1]
      visitedTypeArgs.forEach(arg => {
        const argType = arg.type.toString().replace("!", "")
        // skip args whose input type has no entry in the map
        const nonNullFields = nonNullFieldMap[argType] || []
        nonNullFields.forEach(nonNullField => {
          const path = `${arg.name}.${nonNullField}`
          if (_.get(fieldArgs, path) === null) {
            throw new Error(`${path} cannot be null`)
          }
        })
      })
      return resolve.apply(this, resolverArgs)
    }
  }
}
And then in your schema:
directive @nonNullInput on FIELD_DEFINITION

type Query {
  foo (input: FooInput!): String @nonNullInput
}
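To wire up either version of the directive, register it when building the server; a rough sketch using Apollo Server 2's schemaDirectives option (assuming the typeDefs and resolvers shown above):
const server = new ApolloServer({
  typeDefs,
  resolvers,
  schemaDirectives: {
    // the key must match the directive name used in the SDL (@nonNullInput)
    nonNullInput: NonNullInputDirective,
  },
})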
I'm trying to retrieve all items from a DynamoDB table that match a FilterExpression, and although all of the items are scanned and half do match, the expected items aren't returned.
I have the following in an AWS Lambda function running on Node.js 6.10:
var AWS = require("aws-sdk"),
    documentClient = new AWS.DynamoDB.DocumentClient();

function fetchQuotes(category) {
  let params = {
    "TableName": "quotient-quotes",
    "FilterExpression": "category = :cat",
    "ExpressionAttributeValues": {":cat": {"S": category}}
  };
  console.log(`params=${JSON.stringify(params)}`);
  documentClient.scan(params, function(err, data) {
    if (err) {
      console.error(JSON.stringify(err));
    } else {
      console.log(JSON.stringify(data));
    }
  });
}
There are 10 items in the table, one of which is:
{
  "category": "ChuckNorris",
  "quote": "Chuck Norris does not sleep. He waits.",
  "uuid": "844a0af7-71e9-41b0-9ca7-d090bb71fdb8"
}
When testing with category "ChuckNorris", the log shows:
params={"TableName":"quotient-quotes","FilterExpression":"category = :cat","ExpressionAttributeValues":{":cat":{"S":"ChuckNorris"}}}
{"Items":[],"Count":0,"ScannedCount":10}
The scan call returns all 10 items when I only specify TableName:
params={"TableName":"quotient-quotes"}
{"Items":[<snip>,{"category":"ChuckNorris","uuid":"844a0af7-71e9-41b0-9ca7-d090bb71fdb8","CamelCase":"thevalue","quote":"Chuck Norris does not sleep. He waits."},<snip>],"Count":10,"ScannedCount":10}
You do not need to specify the type ("S") in your ExpressionAttributeValues because you are using the DynamoDB DocumentClient. Per the documentation:
The document client simplifies working with items in Amazon DynamoDB by abstracting away the notion of attribute values. This abstraction annotates native JavaScript types supplied as input parameters, as well as converts annotated response data to native JavaScript types.
It's only when you're using the raw DynamoDB object via new AWS.DynamoDB() that you need to specify the attribute types (i.e., the simple objects keyed on "S", "N", and so on).
With DocumentClient, you should be able to use params like this:
const params = {
  TableName: 'quotient-quotes',
  FilterExpression: '#cat = :cat',
  ExpressionAttributeNames: {
    '#cat': 'category',
  },
  ExpressionAttributeValues: {
    ':cat': category,
  },
};
Note that I also moved the field name into an ExpressionAttributeNames value just for consistency and safety. It's a good practice because certain field names are reserved words in DynamoDB and will break your requests if used directly in an expression.
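A quick sketch of the whole call with those params, using the promise form of the DocumentClient so it also runs on the Node.js 6.10 runtime you mentioned:
const AWS = require('aws-sdk');
const documentClient = new AWS.DynamoDB.DocumentClient();

function fetchQuotes(category) {
  const params = {
    TableName: 'quotient-quotes',
    FilterExpression: '#cat = :cat',
    ExpressionAttributeNames: { '#cat': 'category' },
    ExpressionAttributeValues: { ':cat': category },
  };
  // scan() still reads every item; the filter is applied after the read
  return documentClient.scan(params).promise().then(data => data.Items);
}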
I was looking for a solution that combined KeyConditionExpression with FilterExpression, and eventually I worked this out.
Here, aws is the uuid, Id is an assigned unique number prefixed with the text 'form' so I can tell I have form data, and optinSite is there so I can find enquiries from a particular site. Other data is stored too; this is all I need to get the packet.
Maybe this can be of help to you:
let optinSite = 'https://theDomainIWantedTFilterFor.com/';
let aws = 'eu-west-4:EXAMPLE-aaa1-4bd8-9ean-1768882l1f90';

let item = {
  TableName: 'Table',
  KeyConditionExpression: "aws = :Aw and begins_with(Id, :form)",
  FilterExpression: "optinSite = :Os",
  ExpressionAttributeValues: {
    ":Aw" : { S: aws },
    ":form" : { S: 'form' },
    ":Os" : { S: optinSite }
  }
};
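A minimal sketch of actually running those params; since the values use the typed { S: ... } format, this assumes the low-level client rather than the DocumentClient (the region is just a placeholder):
const AWS = require('aws-sdk');
const dynamodb = new AWS.DynamoDB({ region: 'eu-west-1' });   // placeholder region

dynamodb.query(item, (err, data) => {
  if (err) console.error(err);
  else console.log(data.Items);   // items matching the key condition and filter
});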
I have been working with DynamoDB in a Node.js Express app, and I have a table with a field that is just an empty list; I want to append a string to that list.
The table name is "dev_entrants" and here is an example of the table:
---------------------------------------------
 Primary Key        | Sort Key  |
 eventID            | eventType | entrants
---------------------------------------------
 Qual-919-w5wm1xhnw | Qual      | []
---------------------------------------------
I receive a POST request and route it through Express to a function where, after doing type checks and so on, I try to add to my table with:
import AWS from 'aws-sdk';

const docClient = new AWS.DynamoDB.DocumentClient({region: 'us-west-1'});
const db = {
  docClient
};

...

const entrantsParams = {
  'TableName' : 'dev_entrants',
  'Key': {
    'eventID' : 'Qual-919-w5wm1xhnw',
  },
  'UpdateExpression' : "SET #attrName = list_append(#attrName, :attrValue)",
  'ExpressionAttributeNames' : {
    '#attrName' : 'entrants'
  },
  'ExpressionAttributeValues' : {
    ':attrValue' : ['joe'],
  }
};

const updateEntrantsPromise = db.docClient.update(entrantsParams).promise();
(For the purpose of this example I have replaced variables with the strings they represent.)
I have spent 6 hours or so reading through different documentation, as well as Stack Overflow, trying to find the answer.
The current error I get is "The provided key element does not match the schema". If I remove the brackets around the attrValue I get "wrong operand type" instead. I know the key exists in the table, as I copied and pasted it from there. Also, I am successfully adding things to the table from another function, so my connection is working fine. Can anyone please help me out?
You need to include eventType in the Key object because your table schema has a sort key. If a table has a sort key, every Key you pass must include it along with the partition key. Try it with the following:
const entrantsParams = {
  'TableName' : 'dev_entrants',
  'Key': {
    'eventID' : 'Qual-919-w5wm1xhnw',
    'eventType' : 'Qual'
  },
  'UpdateExpression' : "SET #attrName = list_append(if_not_exists(#attrName, :empty_list), :attrValue)",
  'ExpressionAttributeNames' : {
    '#attrName' : 'entrants'
  },
  'ExpressionAttributeValues' : {
    ':attrValue' : ['joe'],
    ':empty_list': []
  }
};
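Running the update stays the same as in your code; a quick sketch with error handling (inside an async function) so a rejected promise surfaces the DynamoDB error:
try {
  // resolves once DynamoDB has applied the list_append expression
  const result = await db.docClient.update(entrantsParams).promise();
  console.log('update succeeded', result);
} catch (err) {
  // e.g. ValidationException if the Key is still missing the sort key
  console.error('update failed', err);
}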