Azure Storage Table Query - result vs response - node.js

I'm using node.js as my server and have an account on Azure where my storage table resides. I'm retrieving all records for a specific partition using the following:
var query = new azure.TableQuery().where('PartitionKey eq ?', username);
tableSvc.queryEntities(localTableName, query, null, function(error, result, response) {
});
When this call comes back, I want to access the values of the remaining fields of the table. But when I do that using result.entries, the structure looks weird. Alternatively, I think I can access the results via response.body.value[0].userID.
Here is what the structure of "result.entries" vs. the "response" object looks like:
result.entries :
[ { PartitionKey: { '$': 'Edm.String', _: '048tfbne' },
    RowKey: { '$': 'Edm.String', _: '145610564488450166' },
    Timestamp: { '$': 'Edm.DateTime', _: Mon Feb 22 2016 01:47:26 GMT+0000 (UTC) },
    username: { _: '048tfbne' },
    userID: { _: '145610564488450166' },
    deleteAfter: { _: 'not set yet' },
    '.metadata': { etag: 'W/"datetime\'2016-02-22T01%3A47%3A26.4394133Z\'"' } } ]
response :
{ isSuccessful: true,
  statusCode: 200,
  body:
   { 'odata.metadata': 'https://photoshareuserdata.table.core.windows.net/$metadata#userIdentifier',
     value:
      [ { 'odata.etag': 'W/"datetime\'2016-02-22T01%3A47%3A26.4394133Z\'"',
          PartitionKey: '048tfbne',
          RowKey: '145610564488450166',
          Timestamp: '2016-02-22T01:47:26.4394133Z',
          username: '048tfbne',
          userID: '145610564488450166',
          deleteAfter: 'not set yet' } ] },
I thought result.entries would be the better way to access the records, but I am put off by the nested objects and the Edm.String type annotations.
Which is the better way to access the records?

The Table Node Sample shows how to access entities in a table as the result of a query. See the method "runPageQuery".

Actually, according to the official documentation section Query a set of entities, there is a paragraph that says:
If successful, result.entries will contain an array of entities that match the query. If the query was unable to return all entities, result.continuationToken will be non-null and can be used as the third parameter of queryEntities to retrieve more results.
We can also refer to the sample in the Azure-storage-for-node repository on GitHub, which gives us the answer.
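Putting both together, here is a minimal sketch (the helper name queryAllEntities is mine, not from the sample) that follows the continuation token across pages and then reads the typed values; in the azure-storage Node SDK the actual value of each property sits under its _ key:

function queryAllEntities(tableSvc, tableName, query, token, done, items) {
  items = items || [];
  tableSvc.queryEntities(tableName, query, token, function (error, result) {
    if (error) { return done(error); }
    items = items.concat(result.entries);
    if (result.continuationToken) {
      // More pages remain: pass the token back as the third parameter.
      return queryAllEntities(tableSvc, tableName, query, result.continuationToken, done, items);
    }
    return done(null, items);
  });
}

queryAllEntities(tableSvc, localTableName, query, null, function (error, entities) {
  if (error) { throw error; }
  entities.forEach(function (entity) {
    // Each property is an object like { '$': 'Edm.String', _: value },
    // so the value itself is read from the _ key.
    console.log(entity.userID._, entity.username._);
  });
});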

Related

CosmosDB: JSON reader was expecting a value but found 'db'

I have a MongoDB update query that works when I run it in the mongo shell, but the same query throws an error when I run it in the Query Shell of Azure Cosmos DB. Initially I used the following query.
db.sample.update(
  {
    _id: ObjectId("690655905jgj580")
  },
  {
    $set: {
      user_data: {
        eid: "E9076", name: "Jhon", email: "Jhon#xyz.com", posted_on: "1621509348056"
      },
    }
  }
)
The query above fails to update the record via the Query Shell and gives an error.
After looking into other related questions, I found that both the key and the value must be in quotes, so I modified the query as follows. But I'm still getting the same error: JSON reader was expecting a value but found 'db'
db.sample.update(
  {
    "_id": ObjectId("690655905jgj580")
  },
  {
    "$set": {
      "user_data": {
        "eid": "E9076", "name": "Jhon", "email": "Jhon#xyz.com",
        "posted_on": "1621509348056"
      },
    }
  }
)
Can someone help me figure out what's wrong here?

MongoDB aggregation $group stage by already created values / variable from outside

Imagine I have an array of objects, available before the aggregate query:
const groupBy = [
  {
    realm: 1,
    latest_timestamp: 1318874398, // Date.now() values, usually different from each other
    item_id: 1234, // always the same
  },
  {
    realm: 2,
    latest_timestamp: 1312467986, // actually it's the $max timestamp field from the collection
    item_id: 1234,
  },
  {
    realm: ..., // there are many of them
    latest_timestamp: ...,
    item_id: 1234,
  },
  {
    realm: 10,
    latest_timestamp: 1318874398, // but sometimes they can be the same
    item_id: 1234,
  },
]
And a collection (an example set is available on MongoPlayground) with the following schema:
{
  realm: Number,
  timestamp: Number,
  item_id: Number,
  field: Number, // any other useless fields in this case
}
My problem is: how do I $group the values from the collection via the aggregation framework using the already available set of data (from groupBy)?
What has been tried already.
Okay, let's skip the crap ideas, like:
for (const element of groupBy) {
  // array of `find` queries
}
My current working aggregation query looks something like this:
// first stage
{
  $match: {
    item_id: 1234,
    realm: { $in: [1, 2, 3, 4, ..., 10] },
  },
},
{
  $group: {
    _id: {
      realm: '$realm',
    },
    latest_timestamp: {
      $max: '$timestamp',
    },
    data: {
      $push: '$$ROOT',
    },
  },
},
{
  $unwind: '$data',
},
{
  $addFields: {
    'data.latest_timestamp': {
      $cond: {
        if: {
          $eq: ['$data.timestamp', '$latest_timestamp'],
        },
        then: '$latest_timestamp',
        else: '$$REMOVE',
      },
    },
  },
},
{
  $replaceRoot: {
    newRoot: '$data',
  },
},
// At last, after these stages I can do the useful job
but I find it a bit clumsy, and I have already heard that using mapReduce could solve my problem a bit faster than this query (though the official docs don't sound promising about it). Is that true?
As of now, I am using 4 or 5 stages before I can start working with the documents that are useful to me.
Recent update:
I have checked the $facet stage and I find it curious for this particular case. It will probably help me out.
For what it's worth:
After receiving the documents from the necessary stages, I am building a representative cluster chart, which you may also know as a heatmap.
After that, I iterate over each document (or array of objects) one by one to find their correct x and y coordinates, which should be:
[
  {
    x: x (number, actual $price),
    y: y (number, actual $realm),
    value: price * quantity,
    quantity: sum_of_quantity_on_price_level
  }
]
As of now, it's old awful code with for loops nested inside each other, but in the future I will be using the $facet => $bucket operators for that kind of job.
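As a rough illustration of that plan, here is a hedged sketch of a $bucket stage (the price boundaries are invented, and numeric price and quantity fields are assumed from the chart structure above):

{
  $bucket: {
    groupBy: '$price',
    boundaries: [0, 10, 50, 100, 500], // hypothetical price levels
    default: 'other',
    output: {
      quantity: { $sum: '$quantity' },
      value: { $sum: { $multiply: ['$price', '$quantity'] } },
    },
  },
},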
So, I have found an answer to my question in another, but related, way.
I was thinking about using the $facet operator, and to be honest it's still an option, but using it as below is bad practice:
// building the $facet query before aggregation
const ObjectQuery = {}
for (const realm of realms) {
  Object.assign(ObjectQuery, { [realm.name]: [ ... ] })
}
// mongoose query here
aggregation([
  {
    $facet: ObjectQuery
  },
  ...
])
So I have chosen a $project stage and the $switch operator to filter results, just as $group does; a sketch of the idea is below.
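A minimal, hedged sketch of that approach (field names follow the question's schema; the branches are built from the groupBy array above, and $addFields stands in for the $project wording):

// Build $switch branches from the pre-computed groupBy array,
// so each document is compared against its realm's latest timestamp.
const branches = groupBy.map((g) => ({
  case: { $eq: ['$realm', g.realm] },
  then: g.latest_timestamp,
}));

db.collection.aggregate([
  { $match: { item_id: 1234, realm: { $in: groupBy.map((g) => g.realm) } } },
  {
    $addFields: {
      latest_timestamp: { $switch: { branches: branches, default: '$$REMOVE' } },
    },
  },
  // Keep only the documents whose timestamp is the realm's latest.
  { $match: { $expr: { $eq: ['$timestamp', '$latest_timestamp'] } } },
])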
Using mapReduce could also have solved this problem, but for some reason the official Mongo docs recommend avoiding it and choosing aggregation instead: the $group and $merge operators.

How to use scope in loopback filter in json format

I am trying to make a call from my Angular service to a LoopBack API. I have a parcelStatuses collection that contains a parcelId, so I am able to include the parcel collection too, but I also need to check against a particular vendorId, which exists in the parcel collection. I am trying to make use of scope to check against that vendorId, but I don't think I am writing the correct JSON syntax/call. Here is the function inside my service:
private getParcelsByFilter(
  limit: number,
  skip: number,
  vendorId: string,
  filter: string
) {
  const checkFilter = {
    "where": {
      "and": [{"statusRepositoryId": filter}]
    },
    "include": [
      {
        "parcel": [
          {
            "scope": {"vendorId": vendorId}
          },
          "parcelStatuses",
          {"customerData": "customer"}
        ]
      }
    ],
    "limit": limit,
    "skip": skip,
  }
  return this._http.get<IParcel[]>(
    `${environment.url}/ParcelStatuses?filter=${encodeURIComponent(JSON.stringify(checkFilter))}`
  );
}
Here is a demo view of a parcelStatuses collection object:
[{
  "id": "lbh24214",
  "statusRepositoryId": "3214fsad",
  "parcelId": "LH21421"
}]
And a demo JSON of parcel:
[{
  "id": "LHE21421",
  "customerDataId": "214fdsas",
  "customerId": "412dsf",
  "vendorId": "123421"
}]
Please help me write the correct call.
Formatting aside, there are several issues with the query:
Unnecessary and
This line:
where: {
  and: [{ statusRepositoryId: filter }]
}
Can be simplified to:
where: {
  statusRepositoryId: filter
}
As there is only one where condition, the and operator becomes redundant.
Misuse of include and scope
include is used to include relations while scope applies filters to those relations. They can work in tandem to create a comprehensive query:
include: [
  {
    relation: "parcels",
    scope: {
      where: { vendorId: vendorId },
    }
  }
],
This will include the parcels relation as part of the response, while filtering the parcels relation with a where filter.
That means the final code should look similar to the following:
private getParcelsByFilter(
  limit: number,
  skip: number,
  vendorId: string,
  filter: string
) {
  const checkFilter = {
    where: { statusRepositoryId: filter },
    include: [
      {
        relation: "parcels",
        scope: {
          where: { vendorId: vendorId },
        }
      }
    ],
    limit: limit,
    skip: skip,
  }
  return this._http.get<IParcel[]>(
    `${environment.url}/ParcelStatuses?filter=${encodeURIComponent(JSON.stringify(checkFilter))}`
  );
}
Further reading
Please review these resources to get a better understanding of how to use filters.
https://loopback.io/doc/en/lb4/Include-filter.html

id cannot be used in graphQL where clause?

{
  members {
    id
    lastName
  }
}
When I try to get the data from the members table, I get the following response:
{
  "data": {
    "members": [
      {
        "id": "TWVtYmVyOjE=",
        "lastName": "temp"
      },
      {
        "id": "TWVtYmVyOjI=",
        "lastName": "temp2"
      }
    ]
  }
}
However, when I try to update a row with an 'id' where clause, the console shows an error.
mutation {
  updateMembers(
    input: {
      values: {
        email: "testing#test.com"
      },
      where: {
        id: 3
      }
    }
  ) {
    affectedCount
    clientMutationId
  }
}
"message": "Unknown column 'NaN' in 'where clause'",
Some of the results above confuse me:
Why is the id returned not a numeric value? In the db, it is a number.
When I update the record, can I use a numeric id value in the where clause?
I am using nodejs, apollo-client and graphql-sequelize-crud.
TL;DR: check out my possibly not Relay-compatible PR here: https://github.com/Glavin001/graphql-sequelize-crud/pull/30
Basically, the internal source code calls the fromGlobalId API from graphql-relay but passes a primitive value into it (e.g. your 3), causing it to return undefined. Hence I just removed the call from the source code and made a pull request.
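For illustration, a small sketch with the graphql-relay package (the NaN trail is my own reading of the error message, not taken from the PR):

const { toGlobalId, fromGlobalId } = require('graphql-relay');

// The API returns opaque global IDs: base64 of '<Type>:<id>'.
toGlobalId('Member', 1);      // 'TWVtYmVyOjE='
fromGlobalId('TWVtYmVyOjE='); // { type: 'Member', id: '1' }

// A raw database id is not valid base64 of '<Type>:<id>', so it
// decodes to an empty string; a later numeric conversion of ''
// yields the NaN that ends up in the SQL where clause.
fromGlobalId('3');            // { type: '', id: '' }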
P.S. This buggy thing took me 2 hours to solve, and my change failed the build, so I think this solution may not be consistent enough.
Please try this:
mutation {
  updateMembers(
    input: {
      values: {
        email: "testing#test.com"
      },
      where: {
        id: "3"
      }
    }
  ) {
    affectedCount
    clientMutationId
  }
}

Create subscription with addon using node-recurly

Using node-recurly, I can create a subscription object and pass it to the recurly.subscriptions.create call:
const subscription = {
  plan_code: plan.code,
  currency: 'USD',
  account: {
    account_code: activationCode,
    first_name: billingInfo.first_name,
    last_name: billingInfo.last_name,
    email: billingInfo.email,
    billing_info: {
      token_id: paymentToken,
    },
  },
};
I would also like to add the subscription_add_ons property, which, according to the documentation, is supposed to be an array of add-ons. I tried passing it like this:
subscription_add_ons: [
  {
    add_on_code: shippingMethod.servicelevel_token,
    unit_amount_in_cents: parseFloat(shippingMethod.amount) * 100,
  },
],
The server returned an error:
Tag <subscription_add_ons> must consist only of sub-tags named <subscription_add_on>
I then attempted this:
subscription_add_ons: [
  {
    subscription_add_on: {
      add_on_code: shippingMethod.servicelevel_token,
      unit_amount_in_cents: parseFloat(shippingMethod.amount) * 100,
    },
  },
],
That attempt came back with another error.
What's the proper format to pass subscription add on in this scenario?
The proper format is:
subscription_add_ons: {
  subscription_add_on: [{
    add_on_code: shippingMethod.servicelevel_token,
    unit_amount_in_cents: parseFloat(shippingMethod.amount) * 100,
  }],
},
I ended up doing this, which works whether you have one add-on or multiple add-ons. subscription_add_ons is an array which can contain one or more subscription add-ons. I then send the details (along with other info) in the subscription update call. This is similar to what you attempted in your original post, so I'm not sure why that didn't work for you.
details.subscription_add_ons = [
  { subscription_add_on: { add_on_code: "stream", quantity: 3 } },
  { subscription_add_on: { add_on_code: "hold", quantity: 2 } }
];
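For completeness, a hedged sketch of sending those details in the update call (the subscriptionUuid variable is a placeholder, and the update signature is assumed from node-recurly's general callback style rather than verified against the docs):

recurly.subscriptions.update(subscriptionUuid, details, function (err, response) {
  // Hypothetical usage; inspect the response shape for your version.
  if (err) { return console.error(err); }
  console.log(response);
});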