GitHub Repo Stats from the Events API - github-api

Is it possible to replicate the Contributors, Commits, and Code Frequency stats from a repo's Insights tab using the GitHub Events API? I know there are direct API endpoints for retrieving these stats, but I'm interested in how they could be calculated from the various event types -- i.e., PullRequestEvent, PushEvent, etc.
Thanks!
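As a rough illustration of what can (and cannot) be derived from events, here is a minimal TypeScript sketch (assuming Node 18+ with a global fetch and a personal access token) that tallies commits per author from PushEvent payloads, which approximates the Contributors and Commits graphs. The Events API only exposes roughly the last 300 events (about 90 days), and PushEvent payloads carry no addition/deletion counts, so Code Frequency would still require fetching each commit individually.

```typescript
// Sketch: approximate per-author commit counts from recent PushEvents.
// The repo coordinates and token handling are placeholders.
async function commitCountsFromEvents(owner: string, repo: string, token: string) {
  const counts = new Map<string, number>();
  // The Events API caps out at roughly 300 events, i.e. 3 pages of 100.
  for (let page = 1; page <= 3; page++) {
    const res = await fetch(
      `https://api.github.com/repos/${owner}/${repo}/events?per_page=100&page=${page}`,
      { headers: { Authorization: `Bearer ${token}`, Accept: "application/vnd.github+json" } }
    );
    const events: any[] = await res.json();
    if (!Array.isArray(events) || events.length === 0) break;
    for (const ev of events) {
      if (ev.type !== "PushEvent") continue;
      // Each PushEvent payload lists the pushed commits with author name/email.
      for (const commit of ev.payload?.commits ?? []) {
        const author = commit.author?.email ?? "unknown";
        counts.set(author, (counts.get(author) ?? 0) + 1);
      }
    }
  }
  return counts; // e.g. Map { "dev@example.com" => 42, ... }
}
```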

Related

How to use Push Notifications in Acumatica?

We are trying to implement push notifications with a webhook. We know that a GI created on top of a SQL view will not support this, but will it support a projection DAC?
Yes, it supports projection DACs.
The Acumatica Framework monitors its cache, and if the changes go through the Acumatica cache, Push Notifications will work.
But if you write raw SQL and then create a PXProjection over it, the Acumatica cache still will not be used, so the GI will not be able to track notifications.
IMHO, I'd suggest creating multiple GIs in place of the SQL view and collecting the information from multiple tables via those GIs.
BTW, the same is true of PXDatabase.Update, PXDatabase.Insert, and PXDatabase.Delete. If one of these is used anywhere in the code, it will affect both how the Acumatica cache refreshes and push notifications, since both depend on the cache being used.

Azure Data Explorer for activity logging

We are looking to provide a historic activity log on objects in our system (similar to Jira's history tab). We are looking at Azure Data Explorer as a potential tool for addressing this use case.
Sample queries we need to answer (a sketch of these in KQL follows the answer below):
Give me all objects that have changed in the last 30 days.
Give me all objects that have changed in the last 30 days and have the value of key1 set to value1.
Give me all objects that userA changed in the last year.
The amount of data (objects) we have is huge (could be tens of millions), but the activity itself is not, and it will definitely not arrive in a streaming format. Is this the right use case for Azure Data Explorer?
Yes, Azure Data Explorer (Kusto) is the ideal cloud service for this functionality. You can learn more about the service by watching the recent online event and using the quick starts in the docs.
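For illustration, here is a minimal sketch of what the three sample queries could look like in KQL, wrapped in a TypeScript caller using the azure-kusto-data npm client. The table name (ObjectActivity), its columns, the database name, and the credential handling are all assumptions, not a prescribed schema.

```typescript
import { Client as KustoClient, KustoConnectionStringBuilder } from "azure-kusto-data";

// Hypothetical cluster, database, and AAD app credentials.
const kcsb = KustoConnectionStringBuilder.withAadApplicationKeyAuthentication(
  "https://mycluster.westus.kusto.windows.net",
  process.env.APP_ID!,
  process.env.APP_KEY!,
  process.env.TENANT_ID!
);
const client = new KustoClient(kcsb);

// KQL versions of the three sample questions, against an assumed ObjectActivity table
// with columns (Timestamp: datetime, ObjectId: string, UserId: string, Properties: dynamic).
const queries = {
  changedLast30Days: `ObjectActivity | where Timestamp > ago(30d) | distinct ObjectId`,
  changedLast30DaysWithKey1: `ObjectActivity
    | where Timestamp > ago(30d) and tostring(Properties.key1) == "value1"
    | distinct ObjectId`,
  changedByUserALastYear: `ObjectActivity | where Timestamp > ago(365d) and UserId == "userA" | distinct ObjectId`,
};

async function run() {
  const result = await client.execute("activitydb", queries.changedLast30Days);
  console.log(JSON.stringify(result.primaryResults[0], null, 2));
}
run().catch(console.error);
```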

Bringing an MS Graph Search Custom Connector into working mode

Recently Microsoft published the Microsoft Search API (beta), which makes it possible to index external systems by creating an MS Graph Search custom connector.
I created such a connector, and it has been successful so far. I also pushed a few items to the index, and in the MS admin center I created a result type and a vertical. Now I'm able to find the external items in question in the SharePoint Online modern search center, in a dedicated tab belonging to the search vertical created before. So far so good.
But now I wonder:
How can I ensure that the external data is continuously pushed to the MS Search index? (How can this be implemented? Is there any tutorial or a sample project? What is the underlying architecture?)
Is there a concept of Full / Incremental / Continuous crawls for a Search custom connector at all? If so, how can I "hook" into a crawl in order to push changed data to the index?
Or do I have to implement it all on my own? And if so, what would be a suitable approach?
Thank you for trying out the connector APIs. I am glad to hear that you are able to get items into the index and see the results.
Regarding your questions: the logic for determining when to push items, and your crawl strategy, is something that you need to implement on your own. There is no one best strategy per se; it will depend on your data source and the type of access you have to that data. For example, do you get notifications every time the data changes? If not, how do you determine what data has changed? If none of that is possible, you might need to do a periodic full recrawl, but then you will need to consider the size of your data set for ingestion.
We will look into ways to reduce the amount of code you have to write in the future, but right now, this is something you have to implement on your own.
-James
I recently implemented incremental crawling for Graph connectors using Azure Functions. I created a timer-triggered function that fetches the items updated in the data source since the time of the last function run and then updates the search index with those items.
I also wrote a blog post about this approach, using a SharePoint list as the data source. The entire source code can be found at https://github.com/aakashbhardwaj619/function-search-connector-crawler. Hope it's useful.
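For illustration, a minimal sketch of such a timer-triggered function in TypeScript, assuming the Azure Functions Node.js v4 programming model and the beta externalItem endpoint mentioned in the question. The connection id, the schema properties, the ACL entry, and the helpers for tracking the last crawl time, querying the data source, and acquiring a Graph token are all placeholders.

```typescript
import { app, InvocationContext, Timer } from "@azure/functions";

const CONNECTION_ID = "mycustomconnector"; // hypothetical connection id

// Placeholder helpers -- replace with your own state store, data source, and auth.
async function readLastCrawlTime(): Promise<Date> { return new Date(Date.now() - 15 * 60 * 1000); }
async function saveLastCrawlTime(when: Date): Promise<void> { /* persist `when` somewhere durable */ }
async function fetchItemsChangedSince(since: Date): Promise<{ id: string; title: string; body: string; modified: string }[]> { return []; }
async function getGraphToken(): Promise<string> { return "token"; }

app.timer("incrementalCrawl", {
  schedule: "0 */15 * * * *", // every 15 minutes
  handler: async (timer: Timer, context: InvocationContext) => {
    const since = await readLastCrawlTime();
    const changed = await fetchItemsChangedSince(since);
    const token = await getGraphToken();

    for (const item of changed) {
      // PUT the updated item into the connector's index (beta endpoint, as in the question).
      await fetch(
        `https://graph.microsoft.com/beta/external/connections/${CONNECTION_ID}/items/${item.id}`,
        {
          method: "PUT",
          headers: { Authorization: `Bearer ${token}`, "Content-Type": "application/json" },
          body: JSON.stringify({
            acl: [{ type: "everyone", value: "everyone", accessType: "grant" }], // illustrative ACL only
            properties: { title: item.title, lastModified: item.modified },      // must match your connector schema
            content: { type: "text", value: item.body },
          }),
        }
      );
    }

    await saveLastCrawlTime(new Date());
    context.log(`Pushed ${changed.length} updated items to the index.`);
  },
});
```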

Microsoft Bot Framework - very high response times

I'm seeing 10-second response times through every channel (WebChat and Facebook).
My endpoint is a PaaS instance located in the western United States.
The Web App is an S3 size, and the response times are constant (even when there is only one conversation).
I have the following questions:
Is there any way to optimize this?
What are the Azure Bot Framework SLAs?
As the Bot Framework is a preview product, there are currently no SLAs.
Are you using the default state storage? If so, part of the slowdown you mentioned is probably related. We highly recommend implementing your own state service. There is a blog article here discussing the implementations, and there is also a repository here with samples. This is probably not 100% of your issue, but it is probably at least part of it.
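If you do move off the default state service, a minimal sketch of wiring in your own storage (Azure Table storage via the botbuilder-azure package for the v3 Node SDK; the table, account, and key names are placeholders) could look like this:

```typescript
import * as builder from "botbuilder";
import * as azure from "botbuilder-azure";

const connector = new builder.ChatConnector({
  appId: process.env.MICROSOFT_APP_ID,
  appPassword: process.env.MICROSOFT_APP_PASSWORD,
});

const bot = new builder.UniversalBot(connector, (session) => {
  session.send(`You said: ${session.message.text}`);
});

// Back the bot's state with your own Azure Table storage account instead of the
// default state service, ideally in the same region as the bot's Web App.
const tableClient = new azure.AzureTableClient(
  "botstate",                // table name (placeholder)
  process.env.STORAGE_NAME!, // storage account name (placeholder)
  process.env.STORAGE_KEY!   // storage account key (placeholder)
);
bot.set("storage", new azure.AzureBotStorage({ gzipData: false }, tableClient));
```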
Another thing to keep in mind is where your bot is located in relation to your WebChat client and which instance of the Bot Connector you are using; this blog may provide more info. Please see the "Geographic Direct Line endpoints" section.

REST API with versioned data and differential endpoint: optimizing bandwidth and performance

My NodeJS project is based on SailsJS, itself using ExpressJS.
Its API will be used by mobile apps to fetch their data from it.
The tricky part is I don't want the client apps to fetch the whole data tree every time there is a change in the database.
The client only needs to download a differential between the data it's already got and the data on the server.
To achieve that, I thought of using git on the server: create a repository and save each endpoint's data as a JSON file in the repo, with each save triggering an automatic commit.
Then I could create a specific API endpoint that accepts a commit sha as a parameter and returns a diff between that commit and git HEAD (see the sketch further down).
This post by William Benton reassured me about this idea.
I'm now looking for any tips that could help me get this working with the language and frameworks cited above:
I'd like to see a proof of concept of this in action but couldn't find one
I couldn't find an easy way to use git with NodeJS yet.
I'm not sure how to parse the returned diff in client apps developed with the Ionic framework (so AngularJS).
Note: the API will be read-only. All DB changes will be triggered by a custom web back-end used by a few users.
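For reference, a minimal sketch of the differential endpoint described above, assuming the simple-git npm package and a repository in which each endpoint's data is committed as a JSON file; the repo path and route shape are illustrative (a Sails controller action would wrap the same logic):

```typescript
import express from "express";
import simpleGit from "simple-git";

const app = express();
const git = simpleGit("/var/data/api-repo"); // hypothetical path to the data repository

// GET /api/changes/:sha -> unified diff between the client's last-known commit and HEAD.
app.get("/api/changes/:sha", async (req, res) => {
  try {
    const diff = await git.diff([`${req.params.sha}..HEAD`]);
    const head = (await git.revparse(["HEAD"])).trim();
    // The client stores `head` and parses/applies `diff` against its cached data.
    res.json({ head, diff });
  } catch (err) {
    res.status(400).json({ error: "unknown commit sha" });
  }
});

app.listen(1337); // Sails' default port, for illustration
```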
I used the ideas in that post for an experimental configuration-management service. That code is in Erlang and I can't offer Node-specific suggestions, but I have some general advice.
Calling out to git itself wasn't a great option at the time from any of the languages I was interested in using. Using git as a generic versioned-object store actually works surprisingly well, but using git plumbing commands is a pain (and slow, due to all of the forking) and there were (again, at the time) limitations to all of the available native git libraries.
I wound up implementing my own persistent trie data structure and put a git-like library interface atop it. The nice thing about doing this is that your diffs can be sensitive to the data format you're storing; if you call out to git, you're stuck with finding a serialization format for your data that is amenable to standard diffs. (Without a diffable format, though, you can still send back a sequence of operations to the client to replay on whatever stale objects they have.)
