GitHub GraphQL Repository Query, commits totalCount

How can I search for GitHub repositories using GraphQL and also get each repository's total commit count in return?
It seems strange to me that the fields available on Repository include the total count of commit comments, but not the total count of commits.

Here's an example of how to get the total number of commits for the master branch in the rails/rails repository:
query {
  repository(owner: "rails", name: "rails") {
    object(expression: "master") {
      ... on Commit {
        history {
          totalCount
        }
      }
    }
  }
}
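As a quick sketch of running that query from Node.js (assumptions: Node 18+ for the built-in fetch, run as an ES module, and a personal access token in a GITHUB_TOKEN environment variable):

// POST the query above to GitHub's GraphQL endpoint.
const query = `
  query {
    repository(owner: "rails", name: "rails") {
      object(expression: "master") {
        ... on Commit {
          history { totalCount }
        }
      }
    }
  }`;

const res = await fetch('https://api.github.com/graphql', {
  method: 'POST',
  headers: {
    Authorization: `bearer ${process.env.GITHUB_TOKEN}`,
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({ query }),
});

const { data } = await res.json();
console.log(data.repository.object.history.totalCount);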

Related

MongoDB error: $merge cannot be used in a transaction

I have a transaction operation and I want to do a $merge into a collection (one that doesn't have a schema).
This is my implementation, but it doesn't work in transactions; I get: $merge cannot be used in a transaction.
await User.aggregate([
  {
    $match: {
      _id: new mongoose.mongo.ObjectID(id),
    },
  },
  {
    $merge: {
      into: 'deleted-users',
    },
  },
]).option({ session });
Is there an alternative for this scenario, i.e. adding a record to a newly created collection inside a transaction?
$merge is among the excluded stages. As we can read in the official docs:
The following read/write operations are allowed in transactions: (among others) the aggregate command, excluding the following stages: (among others) $merge.
The same transactions page also links to the $merge reference and more.
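One possible workaround, shown below as a minimal sketch (assumptions: a mongoose connection, and MongoDB 4.4+ so the deleted-users collection can be created implicitly inside the transaction), is to keep the $match inside the transaction and do the write with insertMany in the same session, since plain inserts are allowed in transactions:

const session = await mongoose.startSession();
try {
  await session.withTransaction(async () => {
    // Run only the $match stage in the transaction; $merge is not allowed here.
    const docs = await User.aggregate([
      { $match: { _id: new mongoose.Types.ObjectId(id) } },
    ]).session(session);

    // Write the matched documents to the target collection in the same session.
    if (docs.length > 0) {
      await mongoose.connection
        .collection('deleted-users')
        .insertMany(docs, { session });
    }
  });
} finally {
  session.endSession();
}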

Stream - Get the total number of followers

I've created a feed reference and fetched the followers like so:
var admin = client.feed('user', 'admin');
const res = await admin.followers();
But the returned result contains paginated data.
How can I count the total number of followers?
Will this feature be available, or is there a rough estimate on the roadmap?
Is there any other recommended architecture to get this total count when working with Stream?
Looks like this is not supported yet.
Dwight Gunning wrote on GitHub on May 3rd, 2018:
Thanks for the interest. This is still on our long-range backlog.
https://github.com/GetStream/stream-django/issues/42
It's now supported with the client.followStats() function:
// get follower and following stats of the feed
client.feed('user', 'me').followStats()

// the same, but filtered with the given slugs:
// count how many timelines follow me, and how many markets I follow
client.feed('user', 'me').followStats({ followerSlugs: ['timeline'], followingSlugs: ['market'] })
Which returns something like:
{
  results: {
    followers: { count: 1529, feed: 'user:me' },
    followings: { count: 81, feed: 'user:me' }
  },
  duration: '1.92ms'
}
Here is the API documentation for it:
https://getstream.io/activity-feeds/docs/node/following/?language=javascript#reading-follow-stats
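For example, a small sketch of reading the counts out of that response (assuming the same stream-js client as above):

// Read the follower/following counts from the followStats() response.
const { results } = await client.feed('user', 'me').followStats();
console.log(results.followers.count);  // e.g. 1529
console.log(results.followings.count); // e.g. 81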

Pass in multiple cursors as variables to GitHub GraphQL API?

I'm looking up organization members based on a list of organization IDs. Each org has a paged list of members with an endCursor. Since each endCursor will be different and each org has a different number of members (and a different number of pages), how can I pass the different cursors back in as variables? And if I can, how would each cursor be associated with the org ID from the previous query?
# not sure how to pass in the cursor when different-length lists are returned
query($orgIds: [ID!]!, $page_cursor: String) {
  nodes(ids: $orgIds) {
    ... on Organization {
      id
      members(first: 100, after: $page_cursor) {
        edges {
          node {
            id
          }
        }
        pageInfo {
          endCursor
          hasNextPage
        }
      }
    }
  }
}
I've read http://graphql.org/learn/pagination/ but I'm not seeing anything related to passing in multiple cursors for the same edge list.
I haven't found anything in the GraphQL spec about supplying an array of cursors for the same edge list; GitHub would have to come up with a custom feature for that, and I have a feeling it is not quite what you are looking for anyway.
A cursor exists per node, so if you add the cursor field to your edges selection, you will get the cursors for all nodes in your request:
edges {
  cursor
  node {
    id
  }
}
The response would then look something like this:
"edges": [
  {
    "cursor": "Y3Vyc29yOnYyOpLOAANaVM4AA1pU",
    "node": {
      "id": "MDQ6VXNlcjIxOTczMg=="
    }
  },
Please note that endCursor is not the same if you change the first: 100 parameter to, let's say, first: 5, because endCursor is the cursor of the last node of the first 5 results.
The only reference from a cursor back to an organization ID is the structure of the object returned by GitHub's GraphQL API: each cursor in your example is a child of a specific organization.
From my point of view, it would be up to your client to remember that reference if it's needed afterwards. With that in mind, you might want to simply iterate through the pages of a single organization before you go to the next (supplying only one organization per request, not an array).
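Here is a minimal sketch of that one-organization-at-a-time approach (assumptions: Node 18+ for the built-in fetch, a token in the GITHUB_TOKEN environment variable, and the same members connection as in the question):

// Page through one organization's members, following endCursor until done.
async function fetchAllMemberIds(orgId, token) {
  const memberIds = [];
  let cursor = null;
  let hasNextPage = true;

  while (hasNextPage) {
    const res = await fetch('https://api.github.com/graphql', {
      method: 'POST',
      headers: {
        Authorization: `bearer ${token}`,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({
        query: `
          query($orgId: ID!, $cursor: String) {
            node(id: $orgId) {
              ... on Organization {
                members(first: 100, after: $cursor) {
                  edges { node { id } }
                  pageInfo { endCursor hasNextPage }
                }
              }
            }
          }`,
        variables: { orgId, cursor },
      }),
    });

    const { data } = await res.json();
    const page = data.node.members;
    memberIds.push(...page.edges.map((edge) => edge.node.id));
    cursor = page.pageInfo.endCursor;
    hasNextPage = page.pageInfo.hasNextPage;
  }
  return memberIds;
}

Calling fetchAllMemberIds for each organization ID in turn keeps every cursor naturally associated with its organization.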

Get last created object for each user?

I have a collection, say, "Things":
{ id: 1, creator: 1, created: Today }
{ id: 2, creator: 2, created: Today }
{ id: 3, creator: 2, created: Yesterday }
I'd like to create a query that'll return each Thing created by a set of users, but only their most recently created thing.
What would this look like? I can search my collection with an array of creators and it works just fine; how can I also get only the most recently created object per user?
Thing.find({ _creator: { "$in": creatorArray } })...
You cannot find, sort, and pick the most recent in a single find() query. But you can do it using aggregation, as in the snippet below:
Match all the records where the creator is among the ones we are looking for.
Sort the records in descending order based on the created field.
Group the documents based on the creator.
Pick each creator's first document from the group, which will also be their latest.
Project the required fields.
snippet:
Thing.aggregate([
  { $match: { "creator": { $in: [1, 2] } } },
  { $sort: { "created": -1 } },
  { $group: { "_id": "$creator", "record": { $first: "$$ROOT" } } },
  { $project: {
    "_id": 0,
    "id": "$record.id",
    "creator": "$record.creator",
    "created": "$record.created"
  } }
], function (err, data) {
})
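Given the three sample documents above with a creator array of [1, 2], this returns one document per creator (in no guaranteed order), with the Today document winning for creator 2:
{ id: 1, creator: 1, created: Today }
{ id: 2, creator: 2, created: Today }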

How to bulk save an array of objects in MongoDB?

I have looked for a long time and not found an answer. The Node.JS MongoDB driver docs say you can do bulk inserts using insert(docs), which is good and works well.
I now have a collection with over 4,000,000 items, and I need to add a new field to all of them. Usually MongoDB can only write one transaction per 100ms, which means I would be waiting for days to update all those items. How can I do a "bulk save/update" to update them all at once? update() and save() seem to only work on a single object.
Pseudo-code:
var stuffToSave = [];

db.collection('blah').find({}).toArray(function (err, stuff) {
  stuff.forEach(function (item) {
    item.newField = someComplexCalculationInvolvingALookup();
    stuffToSave.push(item);
  });
  db.saveButNotSuperSlow(stuffToSave);
});
Sure, I'd need to put some limit on it, like doing 10,000 at once so as not to try all 4 million in one go, but I think you get the point.
MongoDB allows you to update many documents that match a specific query using a single db.collection.update(query, update, options) call; see the documentation. For example:
db.blah.update(
  { },
  {
    $set: { newField: someComplexValue }
  },
  {
    multi: true
  }
)
The multi option allows the command to update all documents that match the query criteria. Note that exactly the same applies when using the Node.JS driver; see that documentation.
If you're performing many different updates on a collection, you can wrap them all in a Bulk() builder to avoid some of the overhead of sending multiple updates to the database.
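For example, a minimal sketch in the mongo shell, where valueForDoc1 and valueForDoc2 are hypothetical stand-ins for per-document values (since in this case each document needs its own computed value):

// Queue many single-document updates and send them in one batch.
var bulk = db.blah.initializeUnorderedBulkOp();
bulk.find({ _id: 1 }).updateOne({ $set: { newField: valueForDoc1 } });
bulk.find({ _id: 2 }).updateOne({ $set: { newField: valueForDoc2 } });
// ...queue more operations (e.g. ~10,000 per batch), then:
bulk.execute();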
