GitLab API: how to get the last commit?
GET /projects/:id/repository/tree
{
"name": "assets",
"type": "tree",
"mode": "040000",
"id": "6229c43a7e16fcc7e95f923f8ddadb8281d9c6c6"
},
How can I get logs_tree or the last commit?
Since at least version 12.10, GitLab supports pagination on this endpoint, which is how you can make the call return only one commit:
GET /api/v4/projects/:id/repository/commits?per_page=1&page=1
At the current version of the API, we have only one way to solve this problem:
GET /api/v4/projects/:id/repository/commits
The first commit in the array will be the desired one; you can extract it with jq '.[0]'.
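The steps above can be sketched in Python without hitting a live server; the base URL and project id below are hypothetical placeholders, and the stubbed response only mimics the shape of the real one:

```python
import json

def commits_url(base, project_id):
    # per_page=1&page=1 asks the paginated endpoint for only the newest commit.
    return f"{base}/api/v4/projects/{project_id}/repository/commits?per_page=1&page=1"

def latest_commit(commits):
    # GitLab returns commits newest-first, so element 0 is the latest.
    return commits[0] if commits else None

# Stubbed response body with the structure the endpoint returns:
body = '[{"id": "6104942438c14ec7bd21c6cd5bd995272b3faff6", "title": "Sanitize for network graph"}]'
print(latest_commit(json.loads(body))["id"])  # → 6104942438c14ec7bd21c6cd5bd995272b3faff6
```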
You can use this API.
GET /projects/:id/repository/branches/:branch
The result of this API includes the latest commit of the branch.
https://docs.gitlab.com/ee/api/branches.html#get-single-repository-branch
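A minimal sketch of reading the latest commit out of that branch response; the stub below is hypothetical and only mirrors the documented shape:

```python
import json

def branch_head(branch_json):
    # GET /projects/:id/repository/branches/:branch embeds the branch's
    # latest commit under the "commit" key.
    return json.loads(branch_json)["commit"]

# Stubbed response (shape per the branches API docs; values hypothetical):
stub = '{"name": "main", "commit": {"id": "abc123", "title": "Fix typo"}}'
print(branch_head(stub)["id"])  # → abc123
```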
I would recommend following the spec listed here, which says that you can use
GET /projects/:id/repository/commits/tree
to return data like the following example:
{
"id": "6104942438c14ec7bd21c6cd5bd995272b3faff6",
"short_id": "6104942438c",
"title": "Sanitize for network graph",
"author_name": "randx",
"author_email": "dmitriy.zaporozhets@gmail.com",
"created_at": "2012-09-20T09:06:12+03:00",
"message": "Sanitize for network graph",
"committed_date": "2012-09-20T09:06:12+03:00",
"authored_date": "2012-09-20T09:06:12+03:00",
"parent_ids": [
"ae1d9fb46aa2b07ee9836d49862ec4e2c46fbbba"
],
"status": "running"
}
This is the latest commit. For finding logs_tree, the full documentation may help you.
I am leveraging the latest iteration of Azure's Custom question answering module in Language Studio from an external app that I've created, and I cannot figure out how to receive the actual source when a question is answered. I don't know whether that's simply not possible right now, but the sample response in the actual API docs includes the source field; no matter what I've tried, I can't get it to show up.
Page where API doc is found - https://learn.microsoft.com/en-us/rest/api/cognitiveservices/questionanswering/question-answering/get-answers#knowledgebaseanswer
A quick example of the request body I've adapted from the API:
{
  "question": "<question>",
  "top": 3,
  "userId": "<user>",
  "confidenceScoreThreshold": 0.2,
  "rankerType": "Default",
  "filters": {
    "metadataFilter": {
      "metadata": []
    }
  },
  "answerSpanRequest": {
    "enable": true,
    "confidenceScoreThreshold": 0.2,
    "topAnswersWithSpan": 1
  },
  "includeUnstructuredSources": true
}
I understand the metadata bit has nothing there; I may add something later, but as of now I'm not using the metadata aspect of the Language Studio sources themselves.
At any rate, the bottom line is I don't see an option to display a source, and I don't get it back in the response body, yet I see it in the sample response in the API doc. So what gives, am I missing something?
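For reference, that request body can be built and serialized programmatically. A sketch, with field names taken from the GetAnswers REST reference linked above and purely placeholder values:

```python
import json

def build_get_answers_body(question, user_id):
    # Field names follow the GetAnswers REST reference; the values here
    # are hypothetical placeholders, not defaults from the service.
    return {
        "question": question,
        "top": 3,
        "userId": user_id,
        "confidenceScoreThreshold": 0.2,
        "rankerType": "Default",
        "filters": {"metadataFilter": {"metadata": []}},
        "answerSpanRequest": {
            "enable": True,
            "confidenceScoreThreshold": 0.2,
            "topAnswersWithSpan": 1,
        },
        "includeUnstructuredSources": True,
    }

payload = json.dumps(build_get_answers_body("<question>", "<user>"))
```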
Given the URL https://github.com/foo/bar, I want to be able to get all the repos for foo. If foo is a user, I need to call api.repos.getForUser({username: 'foo'}) and if an org, I'll need to call api.repos.getForOrg({org: 'foo'}).
My problem is: how can I tell if "foo" is an org or a user?
Right now, I "solve" it the costly way: I try to get an org called "foo"; if that succeeds, I try to get its repos; if I end up with an exception (I use promises) whose code is "404", I assume "foo" is a user and try to get user repos.
This is obviously inefficient, and has the side effect of adding calls that may trigger rate limit.
Is there an easier way to know whether "foo" is a user or an org?
As we all know, handling exceptions is costly. So instead of requesting the org and handling the 404, you can look at the type property of the response to https://api.github.com/users/<username> to determine whether the account is a user or an organization, and then proceed accordingly.
For example, a call to my GitHub user API https://api.github.com/users/raghav710 returns
{
"login": "raghav710",
...
"type": "User",
...
}
And a call to an organization like https://api.github.com/users/Microsoft returns
{
"login": "Microsoft",
...
"type": "Organization",
...
}
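A minimal sketch of that decision, using the method names from the question and stubbed profile JSON in place of a live API response:

```python
import json

def repos_call_for(profile_json):
    # Inspect the "type" field of GET https://api.github.com/users/<name>
    # to pick the right follow-up call (method names from the question).
    profile = json.loads(profile_json)
    if profile["type"] == "Organization":
        return ("getForOrg", {"org": profile["login"]})
    return ("getForUser", {"username": profile["login"]})

print(repos_call_for('{"login": "Microsoft", "type": "Organization"}'))
# → ('getForOrg', {'org': 'Microsoft'})
```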
Update: Doing it in a single call
I understand that you are already working from a URL https://github.com/<user or organization>/<repo name> and trying to get all the repos of that user or organization.
A suggestion: instead of doing a GET on the above link, do a GET on https://api.github.com/repos/<user or organization>/<repo name>
For example, doing a GET on https://api.github.com/repos/Microsoft/vscode gives
{
"id": 41881900,
"name": "vscode",
"full_name": "Microsoft/vscode",
"owner": {
"login": "Microsoft",
"id": 6154722,
...
"type": "Organization",
},
"private": false,
"html_url": "https://github.com/Microsoft/vscode",
...
}
As you can see, the familiar type field is available under the owner key. You can use this to decide your next call.
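Extracting it is a one-liner; the stub below is trimmed to just the keys used here:

```python
# Stubbed repo payload, trimmed to the keys this example reads:
repo = {
    "full_name": "Microsoft/vscode",
    "owner": {"login": "Microsoft", "type": "Organization"},
}

def owner_kind(repo_payload):
    # The familiar "type" field sits under "owner" in the repo payload.
    return repo_payload["owner"]["type"]

print(owner_kind(repo))  # → Organization
```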
So even if there were a dedicated API call to differentiate, you would still need that extra call to determine whether it's a user or not.
Considering that most repositories belong to users, why not reverse the checks? Try to retrieve the user's repositories first; if that fails, fall back to the organization's repositories.
In addition, I'd cache as much information as possible. Don't retrieve everything in real time while users use your tool; many things won't change often, if they can be changed at all.
I have a custom DSC module which is class based. During the initial sync process the target machine tries to generate a MOF in C:\Windows\System32\dsc, which results in an error; this causes the initial sync to report as failed, even though all the individual configuration resource tasks show as succeeded. The tasks based on the resource whose MOF was not generated report as succeeded, but in fact have not executed at all.
This is the error:
{
"JobId": "4deeaf52-aa56-11e6-a940-000d3ad04eaa",
"OperationType": "Initial",
"ReportFormatVersion": "2.0",
"ConfigurationVersion": "2.0.0",
"StartTime": "2016-11-14T21:37:14.2770000+11:00",
"Errors": [{
"ErrorSource": "DSCPowershellResource",
"Locale": "en-US",
"Errors": {
"Exception": {
"Message": "Could not find the generate schema file dsc\tBlobSync.1.4.tAzureStorageFileSync.schema.mof.",
"Data": {
},
"InnerException": null,
"TargetSite": null,
"StackTrace": null,
"HelpLink": null,
"Source": null,
"HResult": -2146233079
},
"TargetObject": null,
"CategoryInfo": {
"Category": 6,
"Activity": "",
"Reason": "InvalidOperationException",
"TargetName": "",
"TargetType": ""
},
"FullyQualifiedErrorId": "ProviderSchemaNotFound",
"ErrorDetails": null,
"InvocationInfo": null,
"ScriptStackTrace": null,
"PipelineIterationInfo": []
},
"ErrorCode": "6",
"ErrorMessage": "Could not find the generate schema file dsc\tBlobSync.1.4.tAzureStorageFileSync.schema.mof.",
"ResourceId": "[tAzureStorageFileSync]CDrive"
}],
"StatusData": [],
"AdditionalData": [{
"Key": "OSVersion",
"Value": {
"VersionString": "MicrosoftWindowsNT10.0.14393.0",
"ServicePack": "",
"Platform": "Win32NT"
}
},
{
"Key": "PSVersion",
"Value": {
"CLRVersion": "4.0.30319.42000",
"PSVersion": "5.1.14393.206",
"BuildVersion": "10.0.14393.206"
}
}]
}
I have tried manually generating the MOF and including it in the module, but that didn't help (or perhaps I did it wrong). Even though this is a class-based resource, I added the MOF with the name of the class in a \DSCResources\<className>\<className>.schema.mof file. I note that the one generated in the C:\Windows\System32\dsc folder includes the version number, which mine does not; perhaps that's the problem.
After the failed initial sync, the subsequent consistency check does pass, and the MOF is created at the location mentioned in the error message.
The class itself contains a function that calls Import-Module Azure.Storage which is installed on the machine by a different DSC resource, and has been installed at the point of the consistency check, but (obviously) not at the point the initial sync starts. The resource that installs the module is marked as a dependency of the class-resource in the configuration, but I think MOF generation must happen at the point the modules are deployed which is logically before the initial sync has run.
At least that's what I think is happening.
Would be grateful if anyone could instruct me on what can be done in this instance, and whether my assumptions (above) are correct? I can't seem to get any additional errors or telemetry from the MOF compilation process itself to see why the MOF compilation is failing.
Posting an answer, as nobody really had anything to contribute here and I appear to have solved it on my own. I believe the issue is a matter of timing: the dependent DSC modules are delivered from the pull server and compiled before any of them are executed. The dependency of my class module on Azure.Storage meant that the .psm1 file couldn't be compiled, since that module didn't exist on the machine yet (it would be delivered via a DSC resource at a later time).
Perhaps there is some mechanism that accounts for these dependencies in PS-based modules, or some leniency that isn't applied to class-based resources. That's still not clear.
After some experimentation, I have begun creating and shipping the MOF files alongside the .psm1 and .psd1 files, rather than in the \DSCResources\... child folder as outlined in my question, and this appears to have resolved the issue.
Hopefully this helps someone.
When I browse to a gist on gist.github.com it displays a friendly name. E.g. for
https://gist.github.com/stuartleeks/1f4e07546db69b15ade2 it shows stuartleeks/baz
This seems to be determined by the first file that it shows in the list of files for the gist, but I was wondering whether there is any way to retrieve this via the API?
Not directly, but with the Gist API you can get the JSON information associated with a gist, reusing the id from your URL:
GET /gists/:id
In your case: https://api.github.com/gists/1f4e07546db69b15ade2
It includes:
"files": {
"baz": {
"filename": "baz",
and:
"owner": {
"login": "stuartleeks",
That should be enough to infer the name stuartleeks/baz.
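Putting the two pieces together, a sketch of the inference; note that picking the first key of "files" is an assumption about how GitHub orders them, and the stub only mimics the relevant fragment of the real response:

```python
import json

def gist_display_name(gist_json):
    # The friendly name appears to be "<owner>/<first file>"; taking the
    # first key of "files" assumes GitHub's ordering matches the UI.
    gist = json.loads(gist_json)
    first_file = next(iter(gist["files"]))
    return f'{gist["owner"]["login"]}/{first_file}'

stub = '{"files": {"baz": {"filename": "baz"}}, "owner": {"login": "stuartleeks"}}'
print(gist_display_name(stub))  # → stuartleeks/baz
```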
I have a build process running with CCNET v1.8.5. When the build completes, successfully or not I want to send a notification to Slack. I have the notification piece working with Slack's web hooks (see below) but I cannot seem to obtain the status of the current build via the CCNetIntegrationStatus property. The result of using $[$CCNetIntegrationStatus] in my config is always Unknown.
I have a hunch that the reason is that these CCNET integration properties (at least the way that I am declaring them) are processed and defined when the build starts. Of course, when the build starts, the build status would be Unknown.
I have also tried:
$[CCNetIntegrationStatus], the result is an empty/blank string.
$(CCNetIntegrationStatus), ccnet.config cannot be processed, variable not found
$CCNetIntegrationStatus, the result is the literal string $CCNetIntegrationStatus
Ultimately, what I want to achieve is to send an HTTP request (notification) once the build is complete that includes the current builds integration status, the prior builds integration status, the version number, and project name. How would I go about doing this?
Here is the sample configuration:
<!-- block definition to send slack notification -->
<cb:define name="SlackNotificationBlock">
<checkHttpStatus>
<httpRequest>
<method>POST</method>
<uri>$(slackUrl)</uri>
<body> {{
"username": "Build Bot",
"icon_emoji": ":build:",
"attachments": [
{{
"fallback": "Warning! A New Build Approaches!",
"pretext": "Warning! A New Build Approaches!",
"color": "#D00000",
"username": "Build Bot",
"icon_emoji": ":build",
"fields": [
{{
"title": "Project",
"value": "$[$CCNetProject]",
"short": true
}},
{{
"title": "Version",
"value": "$[$CCNetLabel]",
"short": true
}},
{{
"title": "Status",
"value": "$[$CCNetIntegrationStatus]",
"short": true
}},
{{
"title": "Last Status",
"value": "$[$CCNetLastIntegrationStatus]",
"short": true
}},
{{
"title": "Location",
"value": "$(appUrl)",
"short": false
}}
]
}}
]
}}
</body>
<timeout>5000</timeout>
</httpRequest>
</checkHttpStatus>
</cb:define>
<!-- my publishers block -->
<publishers>
<xmllogger/>
<cb:scope slackUrl="url-to-slack"
appUrl="application-url">
<cb:SlackNotificationBlock/>
</cb:scope>
</publishers>
First, I had to chuckle at the variations you tried, like $(CCNetIntegrationStatus). But your hunch is right: the ccnet config code is static. The only way to make it dynamic is to source values from a file that you change during the build. That can be done with the build label, but apparently not with httpRequest.
I would do the URL stuff in NAnt. That gives you more flexibility; you can call it whenever you want.
I found a solution via the CCNetSlackPublisher. It accomplishes via a custom task what I was attempting to do entirely within the CCNET configuration syntax. Specifically, by implementing a custom task, CCNetSlackPublisher is able to correctly obtain the current integration status. It then sends that along to my configured Slack channel. An additional benefit is that my CCNET configuration is simpler as well!
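Under the hood, such a publisher just POSTs an ordinary incoming-webhook payload to Slack. A rough Python sketch of the body it would build, mirroring the attachment layout from the ccnet.config block in the question (function name and sample values are hypothetical):

```python
import json

def build_slack_payload(project, version, status, last_status, app_url):
    # Mirrors the attachment layout from the ccnet.config block in the
    # question; all values are supplied by the caller.
    return {
        "username": "Build Bot",
        "icon_emoji": ":build:",
        "attachments": [{
            "fallback": "Warning! A New Build Approaches!",
            "pretext": "Warning! A New Build Approaches!",
            "color": "#D00000",
            "fields": [
                {"title": "Project", "value": project, "short": True},
                {"title": "Version", "value": version, "short": True},
                {"title": "Status", "value": status, "short": True},
                {"title": "Last Status", "value": last_status, "short": True},
                {"title": "Location", "value": app_url, "short": False},
            ],
        }],
    }

body = json.dumps(build_slack_payload("MyProject", "1.2.3", "Success", "Failure", "https://example.test"))
```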
I was never able to 100% confirm my hunch that the CCNetIntegrationStatus property is evaluated once when the build starts. However, given the behavior that I was seeing I believe this to be the case.