Hello, I am trying to query Google Datastore entities from the Node.js API. I have an entity which has an owner (string), a start time (datetime) and an end time (datetime). I am trying to query for all entities which match the given owner string and start after a given date, with the following function (ES2016).
static async getAvailability (owner, month = currentMonth) {
  const firstOfMonth = moment([currentYear, month])
  const query = datastore.createQuery('availability')
    .filter('owner', '=', owner)
    .filter('end', '>', firstOfMonth.toDate().toJSON())
    .order('end', {
      descending: true
    })
  try {
    // promise version of runQuery, same function
    const result = await datastore.runQueryAsync(query)
    return result.map(result => {
      const { key, data } = result
      data._id = key.id
      return data
    })
  } catch (e) {
    console.log('error', e.stack)
    return []
  }
}
index.yaml
indexes:
- kind: availability
  properties:
  - name: owner
  - name: start
    direction: desc
  - name: end
    direction: desc
I am getting a precondition failed error when I run the query. If there is any more information I can provide, I would be more than happy to add it.
The query you listed filters only on owner and end. When you are using Cloud Datastore, the index you use has to exactly match the query.
In the case of the query you listed, you need the index:
- kind: availability
  properties:
  - name: owner
  - name: end
    direction: desc
If you actually wanted to filter on your start date, your filter would have to be:
.filter('start', '>', firstOfMonth.toDate().toJSON())
And you would have to specify it first in your orders:
.order('start')
.order('end', {
  descending: true
})
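Putting the two together: if the intent is "everything for this owner that starts after the first of the month", the query and the matching index entry would look roughly like this (just a sketch based on your snippets, not tested against your data; the index property order and directions have to mirror the filters and sort orders):

const query = datastore.createQuery('availability')
  .filter('owner', '=', owner)
  .filter('start', '>', firstOfMonth.toDate().toJSON())
  .order('start')
  .order('end', {
    descending: true
  })

and in index.yaml:

indexes:
- kind: availability
  properties:
  - name: owner
  - name: start
  - name: end
    direction: desc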
I have the following schema where I am basically just trying to have a table with id as the primary key, and both code and secondCode as global secondary indexes to use to query the table.
resource "aws_dynamodb_table" "myDb" {
name = "myTable"
billing_mode = "PAY_PER_REQUEST"
hash_key = "id"
attribute {
name = "id"
type = "S"
}
attribute {
name = "code"
type = "S"
}
attribute {
name = "secondCode"
type = "S"
}
global_secondary_index {
name = "code-index"
hash_key = "code"
projection_type = "ALL"
}
global_secondary_index {
name = "second_code-index"
hash_key = "secondCode"
projection_type = "ALL"
}
}
When I try to look for one item by code
const toGet = Object.assign(new Item(), {
  code: 'code_456',
});
item = await dataMapper.get<Item>(toGet);
Locally I get:
ValidationException: The number of conditions on the keys is invalid
and on the deployed instance of the DB I get:
The provided key element does not match the schema
I can see from the logs that the key is not being populated:
Serverless: [AWS dynamodb 400 0.082s 0 retries] getItem({ TableName: 'myTable', Key: {} })
Here is the class configuration for Item
@table(getEnv('MY_TABLE'))
export class Item {
  @hashKey({ type: 'String' })
  id: string;

  @attribute({
    indexKeyConfigurations: { 'code-index': 'HASH' },
    type: 'String',
  })
  code: string;

  @attribute({
    indexKeyConfigurations: { 'second_code-index': 'HASH' },
    type: 'String',
  })
  secondCode: string;

  @attribute({ memberType: embed(NestedItem) })
  nestedItems?: Array<NestedItem>;
}

class NestedItem {
  @attribute()
  name: string;

  @attribute()
  price: number;
}
I am using https://github.com/awslabs/dynamodb-data-mapper-js
I looked at the repo you linked for the package; I think you need to use the .query(...) method with the indexName parameter to tell DynamoDB you want to use that secondary index. Usually in DynamoDB, get operations use the default keys (in your case, you'd use get for queries on id, and query for queries on the indices).
Checking the docs, it's not very clear - if you look at the GetItem reference, you'll see there's nowhere to supply an index name to actually use the index, whereas the Query operation allows you to supply one. As for why you need to query this way, you can read this: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/HowItWorks.CoreComponents.html
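For what it's worth, based on the package README the call would look something like this (a sketch only; I'm assuming the dataMapper instance and the 'code-index' name from your snippets):

// query the GSI instead of doing a GetItem on the table's primary key
for await (const item of dataMapper.query(
  Item,
  { code: 'code_456' },
  { indexName: 'code-index' }
)) {
  // each `item` is an Item instance whose code equals 'code_456'
  console.log(item);
}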
The issue you are facing is due to calling GetItem on an index, which is not possible. A GetItem must target a single item, but an index can contain multiple items with the same key (unlike the base table). For this reason you can only use the multi-item APIs on an index, which are Query and Scan.
I'm having trouble with Node and Knex.js.
I'm trying to build a mini blog, with posts and the ability to add multiple tags to a post.
I have a Post model with the following properties:
id SERIAL PRIMARY KEY NOT NULL,
name TEXT,
Second, I have a Tags model that is used for storing tags:
id SERIAL PRIMARY KEY NOT NULL,
name TEXT
And I have a many-to-many table, post_tags, that references posts & tags:
id SERIAL PRIMARY KEY NOT NULL,
post_id INTEGER NOT NULL REFERENCES posts ON DELETE CASCADE,
tag_id INTEGER NOT NULL REFERENCES tags ON DELETE CASCADE
I have managed to insert tags and create posts with tags,
but when I want to fetch a post's data with the tags attached to it, I'm having trouble.
Here is the problem:
const data = await knex.select('posts.name as postName', 'tags.name as tagName')
  .from('posts')
  .leftJoin('post_tags', 'posts.id', 'post_tags.post_id')
  .leftJoin('tags', 'tags.id', 'post_tags.tag_id')
  .where('posts.id', id)
This query returns the following result:
[
  {
    postName: 'Post 1',
    tagName: 'Youtube',
  },
  {
    postName: 'Post 1',
    tagName: 'Funny',
  }
]
But I want the result to be formatted & returned like this:
{
  postName: 'Post 1',
  tagName: ['Youtube', 'Funny'],
}
Is that even possible with a query, or do I have to format the data manually?
One way of doing this is to use some kind of aggregate function. If you're using PostgreSQL:
const data = await knex.select('posts.name as postName', knex.raw('ARRAY_AGG (tags.name) tags'))
  .from('posts')
  .innerJoin('post_tags', 'posts.id', 'post_tags.post_id')
  .innerJoin('tags', 'tags.id', 'post_tags.tag_id')
  .where('posts.id', id)
  .groupBy("postName")
  .orderBy("postName")
  .first();
->
{ postName: 'post1', tags: [ 'tag1', 'tag2', 'tag3' ] }
For MySQL:
const data = await knex.select('posts.name as postName', knex.raw('GROUP_CONCAT (tags.name) as tags'))
  .from('posts')
  .innerJoin('post_tags', 'posts.id', 'post_tags.post_id')
  .innerJoin('tags', 'tags.id', 'post_tags.tag_id')
  .where('posts.id', id)
  .groupBy("postName")
  .orderBy("postName")
  .first()
  .then(res => Object.assign(res, { tags: res.tags.split(',') }))
There are no arrays in MySQL, and GROUP_CONCAT will just concatenate all tags into a single string, so we need to split them manually.
->
RowDataPacket { postName: 'post1', tags: [ 'tag1', 'tag2', 'tag3' ] }
The result is correct as that is how SQL works - it returns rows of data. SQL has no concept of returning anything other than a table (think CSV data or Excel spreadsheet).
There are some interesting things you can do in SQL to concatenate the tags into a single string, but that is not really what you want. Either way you will need to add a post-processing step.
With your current query you can simply do something like this:
function formatter (result) {
  let set = {};
  result.forEach(row => {
    if (set[row.postName] === undefined) {
      set[row.postName] = row;
      set[row.postName].tagName = [set[row.postName].tagName];
    } else {
      set[row.postName].tagName.push(row.tagName);
    }
  });
  return Object.values(set);
}
// ...
query.then(formatter);
This shouldn't be slow as you're only looping through the results once.
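For example, reusing the query from the question (note that formatter returns an array, since the rows may cover more than one post):

const rows = await knex.select('posts.name as postName', 'tags.name as tagName')
  .from('posts')
  .leftJoin('post_tags', 'posts.id', 'post_tags.post_id')
  .leftJoin('tags', 'tags.id', 'post_tags.tag_id')
  .where('posts.id', id);

const [post] = formatter(rows);
// -> { postName: 'Post 1', tagName: [ 'Youtube', 'Funny' ] }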
What is the best approach for a batch update or transaction that reads a value from the first write and then uses this value to make further updates?
Here is an example:
// create person
const id = await db
  .collection("person")
  .add({ ...person })
  .then(ref => ref.id)

// then do a series of updates
let batch = db.batch()

const private_doc = db
  .collection("person")
  .doc(id)
  .collection("private")
  .doc("data")
batch.set(private_doc, {
  last_modified,
  version: 1,
  versions: []
})

const some_index = db.collection("data").doc("some_index")
batch.update(some_index, {
  [id]: { first_name: person.first_name, last_name: person.last_name, last_modified }
})

const another_helpful_doc = db.collection("some_other_collection").doc("another_helpful_doc")
batch.update(another_helpful_doc, {
  [id]: { first_name: person.first_name, last_name: person.last_name, image: person.image }
})

return batch.commit().then(() => {
  person.id = id
  return person
})
You can see here that if there is an error in any of the batch updates, the person doc will still be created, which is bad. I could add a catch to delete the person doc if anything fails, but I'm interested to see if this is possible with transactions or batches.
You can call the doc() method without specifying any path in order to create a DocumentReference with an auto-generated ID, and then use the reference later. Note that this does NOT create the document corresponding to the DocumentReference.
So, the following would do the trick, since all the writes/updates are included in the batched write:
const new_person_ref = db.collection("person").doc();
const id = new_person_ref.id;

let batch = db.batch()
batch.set(new_person_ref, { ...person })

// note the "_ref" suffix in the variable name: this is a DocumentReference,
// not a DocumentSnapshot, and naming it this way helps avoid confusion
const private_doc_ref = db
  .collection("person")
  .doc(id)
  .collection("private")
  .doc("data")
batch.set(private_doc_ref, {
  last_modified,
  version: 1,
  versions: []
})
//....
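The rest simply mirrors the updates from your original snippet, so the person document and the index updates all succeed or fail together, roughly:

const some_index = db.collection("data").doc("some_index")
batch.update(some_index, {
  [id]: { first_name: person.first_name, last_name: person.last_name, last_modified }
})

const another_helpful_doc = db.collection("some_other_collection").doc("another_helpful_doc")
batch.update(another_helpful_doc, {
  [id]: { first_name: person.first_name, last_name: person.last_name, image: person.image }
})

return batch.commit().then(() => {
  person.id = id
  return person
})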
db.collection("resource").update({name: name}, {
name: name,
type: type
}, {
upsert: true
}
I differentiate documents by their names. I do not add a document if one with the same name already exists, but I want to warn the user by saying "It already exists, operation failed". How can I achieve that?
It sounds like you want to insert documents, not update-or-insert (upsert) them.
1: Add unique index on resource.name ahead of time.
db.resources.createIndex({ name: 1 }, { unique: true })
Important: do this once, not on every request.
See mongodb create index docs.
2: Use insert instead of update + upsert.
It sounds like you want to actually insert a document, and get an error if there is a duplicate key.
db.resources.insert({ name: "AJ" }) // ok
db.resources.insert({ name: "AJ" }) // error!
You will get a duplicate key error on the second insert. Error code 11000.
See mongodb docs on insert.
3: Use promise-try-catch in JavaScript.
The code to do error checking looks like:
var db = require("mongojs")(DATABASE_URL, ["resources"])

var duplicateKey = function (err) {
  return err.code == "11000"
}

db.resources.insert({ name: name })
  .then(function () {
    // success!
  })
  .catch(duplicateKey, function () {
    // sorry! name is taken
  })
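If you are on the plain MongoDB Node.js driver rather than mongojs (your original snippet looks that way), the same idea is an insertOne plus a check for the duplicate-key error code. A rough sketch, assuming the unique index on name is already in place:

try {
  await db.collection("resource").insertOne({ name: name, type: type })
  // created successfully
} catch (err) {
  if (err.code === 11000) {
    // duplicate name: tell the user "It already exists, operation failed"
  } else {
    throw err
  }
}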
I am writing a custom app to track iteration progress by day. Is there a built-in way in Rally to get the number of user stories that are in the "Accepted" state for a specific date, and the number of points (or do I have to get all user stories and parse their revision histories)?
There is an IterationCumulativeFlowData object in the WS API, which is populated at midnight in the Workspace timezone when the data collection runs, on the workdays specified in the Workspace Setup screen.
Data is stored for each day of the iteration and a corresponding state. There is a CumulativeFlowData object for Day 1 of the Iteration for everything in a Defined state, for Day 1 of the Release for everything in an In-Progress state, etc.
The CumulativeFlowData object also stores CardEstimateTotal, which is the sum of the estimates of the cards in each state.
Here is an example of an app written with rally-node that returns iteration data for a specific state (Accepted) as of the last day of the iteration.
In this example the CreationDate of the last result is 2013-08-27T06:00:00.000Z, while the EndDate of the iteration in question was 2013-08-27 11:59:59 PM America/Denver (which is 2013-08-28T05:59:59.000Z), so I had to manipulate a date in order to make this query condition return the data for the last day of the iteration:
query = query.and('CreationDate', '>', endDateMinusOneDay);
Here is the full js file of the example:
var rally = require('rally'),
    queryUtils = rally.util.query,
    restApi = rally({
      user: 'user@co.com',
      pass: 'secret',
      apiVersion: 'v2.0',
      server: 'https://rally1.rallydev.com',
      requestOptions: {
        headers: {
          'X-RallyIntegrationName': 'My cool node.js program',
          'X-RallyIntegrationVendor': 'My company',
          'X-RallyIntegrationVersion': '1.0'
        }
      }
    });

function findIteration() {
  return restApi.query({
    type: 'Iteration',
    start: 1,
    pageSize: 2,
    limit: 10,
    fetch: ['ObjectID', 'EndDate'],
    scope: {
      project: '/project/12352608219',
      up: false,
      down: false
    },
    query: queryUtils.where('Name', '=', 'i777')
  });
}

function queryIterationData(result) {
  var endDate = result.Results[0].EndDate,
      oid = result.Results[0].ObjectID;
  console.log('endDate', endDate);
  var date1 = new Date(endDate);
  var ms = date1.getTime() - 86400000; // 86400000 is the number of milliseconds in a day
  var date2 = new Date(ms);
  var endDateMinusOneDay = date2.toISOString();
  console.log('date2 ISO', date2.toISOString());
  var query = queryUtils.where('IterationObjectID', '=', oid);
  query = query.and('CardState', '=', 'Accepted');
  query = query.and('CreationDate', '>', endDateMinusOneDay);
  return restApi.query({
    type: 'IterationCumulativeFlowData',
    fetch: ['CardCount', 'CardEstimateTotal', 'CardState', 'CreationDate'],
    query: query
  });
}

function onSuccess(result) {
  console.log('Success!', result);
}

function onError(errors) {
  console.log('Failure!', errors);
}

findIteration()
  .then(queryIterationData)
  .then(onSuccess)
  .fail(onError);
It returns: