I want to get a list of issues that have been created in a specific date range using the GitHub Enterprise API. What I want to do would be the equivalent of doing a search on the issues page, as shown in the image below:
I have tried the following command: curl -H "Authorization: token myToken" "https://github.mydomain.com/api/v3/repos/owner/repo/issues?state=all&since=2015-09-01" > issues.json but that does not give me what I need, because the since parameter, according to the API docs, is described as:
Only issues updated at or after this time are returned. This is a timestamp in ISO 8601 format: YYYY-MM-DDTHH:MM:SSZ
Thanks in advance!
So after lots of googling and reading through the GitHub API docs, I figured it out. What I needed for this was the GitHub Search API. The first thing I did was figure out which endpoints were available to me on my enterprise API, as described in this Stack Overflow post. So I used the following command to do that:
curl -H "Authorization: token [myToken]" "https://github.mydomain.com/api/v3/"
One of the endpoints returned in the response was:
"issue_search_url": "https://github.mydomain.com/api/v3/search/issues?q={query}{&page,per_page,sort,order}"
Using that endpoint, I constructed the following command that gave me what I needed:
curl -H "Authorization: token [myToken]" "https://github.mydomain.com/api/v3/search/issues?page=1&per_page=100&sort=created&order=asc&q=repo:[Owner]/[RepoName]+is:issue+created:>=2015-09-01"
Let's break down the parameters (anything after the ? sign):
page=1&per_page=100: The default number of results for this request is 30 per page. In my case I had 664 results, so I needed to do multiple requests, specifying which page (page=1) and how many results I wanted per request (per_page=100), until I got all of them. In my case I did 7 requests with the above URL, changing the page number each time (a loop sketch follows this breakdown). For more info see the GitHub docs on Pagination
&sort=created&order=asc: Sort by the created date in ascending order (oldest first). See the GitHub Search API and Searching Issues docs
q=repo:[Owner]/[RepoName]+is:issue+created:>=2015-09-01: Form a search query (q=) that limits the search to issues (is:issue) created on or after 2015-09-01 (created:>=2015-09-01) in the repo [Owner]/[RepoName] (repo:[Owner]/[RepoName])
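To automate those 7 requests, a small shell loop over the page parameter is enough. This is just a sketch using the same placeholder token, owner, and repo as above; note that the Search API typically only returns the first 1000 matches.

for page in 1 2 3 4 5 6 7; do
  # One search request per page, 100 results at a time, saved to its own file.
  curl -H "Authorization: token [myToken]" \
    "https://github.mydomain.com/api/v3/search/issues?page=$page&per_page=100&sort=created&order=asc&q=repo:[Owner]/[RepoName]+is:issue+created:>=2015-09-01" \
    > "issues_page_$page.json"
done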
Hope this helps others, as I have found that the GitHub API docs are not very clear.
Related
Trying to get all the branches under a project using the GitLab API, but I can see only 20 branches are returned. How can I get the complete list of all the branches? I am using the following API call.
curl --header "PRIVATE-TOKEN: <token>" "https://gitlab.com/api/v4/projects/1521/repository/branches"
Found the solution under Pagination in the official GitLab API documentation. By default we get 20 results; we can increase the number of results by using per_page in the API URL as follows.
https://gitlab.com/api/v4/projects/<Project_id>/repository/branches?per_page=50
Getting branches is limited to 20 per page by default. In order to get all branches, use query parameters like the ones below (a loop sketch follows the documentation links):
https://gitlab.com/api/v4/projects/2009901/repository/branches/?page=2
https://gitlab.com/api/v4/projects/2009901/repository/branches/?per_page=100
Documentation can be found here:
https://docs.gitlab.com/ee/api/#pagination
This is missing in the Branches API documentation unfortunately:
https://docs.gitlab.com/ee/api/branches.html
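If a project has more than 100 branches, you can walk the pages in a loop until GitLab returns an empty page. A rough bash sketch, reusing the token placeholder and project ID from the question:

TOKEN="<token>"
PROJECT_ID=1521
page=1
while true; do
  # Request one page of up to 100 branches.
  result=$(curl --silent --header "PRIVATE-TOKEN: $TOKEN" \
    "https://gitlab.com/api/v4/projects/$PROJECT_ID/repository/branches?per_page=100&page=$page")
  # Stop once GitLab returns an empty JSON array.
  [ "$result" = "[]" ] && break
  echo "$result"
  page=$((page + 1))
done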
I want to get a JSON file with all the projects that appear on my dashboard and that I have Developer access to. I thought it would be a very simple job using the GitLab API, more specifically the Projects API, but so far I can't. Is it possible?
Note: I generated a token with the api scope.
Here's what I tried and the results:
curl --header "PRIVATE-TOKEN: XXXXX" https://gitlab.com/api/v3/projects
It gives me all the projects I don't own but have Master access to. It doesn't include the ones I have Developer or other access to. This result is the closest to what I'm looking for so far.
curl --header "PRIVATE-TOKEN: XXXXX" https://gitlab.com/api/v4/projects
It gives me a list of projects I seem to have access to, even though I have no idea what they are.
curl --header "PRIVATE-TOKEN: XXXXX" https://gitlab.com/api/v4/users/myUser/projects
It gives me all the projects I own. I thought that one would work, since the documentation says "Get a list of visible projects for the given user." I also tried with the membership attribute set to true or false, but there is no difference.
Any advice/help would be appreciated!
I know this is an old question, but maybe others will stumble over this as I just did.
My solution was adding the per_page option:
curl -s -X GET -H 'Private-Token: ' 'https://gitlab.com/api/v4/projects?per_page=100'
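To get closer to the dashboard view the original question asks for, membership=true restricts the list to projects you are a member of, and on newer GitLab versions the min_access_level parameter (30 corresponds to the Developer role) filters by role. Treat the exact parameter support as something to verify against your GitLab version:

curl --header "PRIVATE-TOKEN: <your_token>" "https://gitlab.com/api/v4/projects?membership=true&min_access_level=30&per_page=100"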
I'm trying to get all of the pull requests for a given repo. The GitHub API paginates results such that you cannot get all the results at once. In the documentation, they say that getting all of the results will require knowing how many pages there are. They say you can learn how many pages there are by getting the Link response header, which you should be able to get with curl -I https://api.github.com/repos/rails/rails, for instance. But, while that works for the rails repository, it does not work for the repo that I need: /lodash/lodash. When I run the same command with lodash, I get:
curl -I https://api.github.com/repos/lodash/lodash/pulls
HTTP/1.1 200 OK
...
Access-Control-Expose-Headers: ETag, Link, X-GitHub-OTP, X-RateLimit-Limit,...
...
In other words, Link is an Access-Control-Expose-Header for the lodash repository. I haven't been able to find any information on how to get it, given that.
So I believe the crux of my question is "How do I get an Access-Control-Expose-Header?" but I wanted to provide context in case there is another way of getting all pull requests.
As of today, there are no open pull requests for the lodash repository, so you will have no results.
In the GitHub API, the default state is open when you retrieve pull requests:
Either open, closed, or all to filter by state. Default: open
Applying a filter that returns more pages will give you the Link header:
curl -I https://api.github.com/repos/lodash/lodash/pulls?state=all
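With pagination in play, the Link header contains rel="next" and rel="last" URLs, and the last page number can be pulled out with a bit of shell. A rough sketch that assumes the header format GitHub currently returns:

curl -sI "https://api.github.com/repos/lodash/lodash/pulls?state=all&per_page=100" \
  | grep -i '^link:' \
  | sed -E 's/.*[?&]page=([0-9]+)>; rel="last".*/\1/'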
The basic work-flow I am trying to implement is to generate a PDF from FileMaker data, upload it to DocuSign for signing, and download the signed document back to FileMaker.
The DocuSign API requires custom headers, so I cannot use the built-in FileMaker 13 Insert From URL script step. Instead, I am using the BaseElements plug-in BE_HTTP_Set_Custom_Header and BE_GetURL functions. I currently have the DocuSign Login API call working.
Now I am trying to use the DocuSign API to upload a document and request a signature. This requires a multipart/form-data POST request. Unfortunately, neither the BaseElements nor the Troi URL plug-in supports multipart/form-data. In fact, I cannot find any plug-in that does. Is anyone aware of a FileMaker plug-in that supports multipart/form-data POST?
https://www.docusign.com/developer-center/quick-start/request-signatures
According to a comment on the Goya support forum last week, the next version of the BaseElements plug-in should support pass-through to the curl command line utility. If true, then as an alternative it seems possible to write a curl command to build the proper request, but my HTTP and curl knowledge is limited. So far, I have been unable to get the DocuSign signature request example working in Terminal. Has anyone been able to upload a document and request a signature with a single curl command?
http://support.goya.com.au/discussions/free-baseelements-plugin/1088-be-plugin-and-http-file-upload
Finally, I would be grateful for any other ideas or suggestions for attacking this problem.
Thank you!
You could use ScriptMaster and Groovy to write your own function to support the multipart/form-data type.
It took me a whole week, but I did manage to integrate it completely, working on FileMaker 14. I studied the various providers out there with my CEO, and we ended up choosing DocuSign because our clients would be able to sign/approve our Estimates from literally any device (also providing us with some more information if we need it, such as credit card details), and because the annual costs were very competitive.
On the Development side it was hard, but not impossible.
The steps to master it are as follows:
1 - Study REST and HTTP request fundamentals (two YouTube videos will do the trick).
2 - Get familiar with the curl command in order to make POST and GET requests from Terminal (on Mac). Once you get to this point, you can try to follow DocuSign's steps to POST and GET from the Mac Terminal (not through FileMaker just yet). The first command that worked for me is as follows:
curl -i -H "Accept: application/json" -H 'X-DocuSign-Authentication:{"Username": "myemail@hotmail.com","Password": "mypassword", "IntegratorKey": "fae5e715-dec2-477f-906e-b6300bc9d09a"}' -X GET https://demo.docusign.net/restapi/v2/accounts/8aabdb38-41ab-4fab-bae9-63a071394a7a
If you replace the credentials and account ID above with your own DocuSign developer information, you should get an HTTP 200 OK response, which means you are on the right track.
This is important because you need to understand what information should be in the headers so that DocuSign accepts your requests.
3 - Translate the same concept into FileMaker. This is the trickiest part. First you need to create two custom variables. One will be your DocuSign authentication (email, password and Integrator Key). Something like this:
[screenshot of the authentication variable]
and the other one will be your endPoint, which is simply your DocuSign URL followed by your account number.
[screenshot of the endpoint variable]
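In shell terms (reusing the placeholder credentials and account GUID from the earlier curl command), those two pieces of information look roughly like this; in FileMaker they simply live in variables instead of shell variables:

AUTH='{"Username": "myemail@hotmail.com","Password": "mypassword", "IntegratorKey": "fae5e715-dec2-477f-906e-b6300bc9d09a"}'
ENDPOINT="https://demo.docusign.net/restapi/v2/accounts/8aabdb38-41ab-4fab-bae9-63a071394a7a"
curl -i -H "Accept: application/json" -H "X-DocuSign-Authentication:$AUTH" -X GET "$ENDPOINT"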
4 - Now you need to install the BaseElements plugin into your FileMaker database (you wouldn't need this if you were using FileMaker 16). This will allow you to send POST, GET, DELETE, and UPDATE requests to DocuSign. Unfortunately BaseElements's documentation is not great, so I struggled a little bit until I got an actual result, but I will try to break it down for you here.
In order to make a GET request similar to the example that we have made, we need to make a one-line script which looks like this:
[screenshot of the one-line script]
and then, inside the Refresh Object function:
[screenshot of the Refresh Object calculation]
This script was adapted from a BaseElements plugin example used to do Vimeo HTTP GET requests; please see the link for more details on how to use and test it: FileMaker REST using BaseElements Plugin - ISO FileMaker Magazine.
I think this gives you a pretty good idea of how it works.
Unfortunately DocuSign still doesn't provide any documentation or support for FileMaker developers, and this is as far as the internet goes. So if you need to make POST requests, it gets a bit more complex: you will have to put your FileMaker fields into JSON format, follow the BaseElements POST syntax (changing the script from GET to POST accordingly), and insert one more command with your data:
// insert this on your FileMaker POST script after the Headers List end
~data = BE_HTTP_POST ( ~endpoint ; "{
\"documents\": [
{ all of your JSON code adapted to FileMaker }]}");
You will also have to turn the PDF files that you want to send into Base64 format.
If you do this trick you will be good to go.
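For what it's worth, here is a rough curl sketch of the kind of POST being described: creating an envelope from a base64-encoded PDF and sending it out for signature. The field names follow DocuSign's v2 REST examples, but the subject, recipient, and file name are placeholders, so check the current DocuSign documentation before relying on this:

# Encode the PDF; tr strips the line wrapping that some base64 implementations add.
BASE64_DOC=$(base64 < myfile.pdf | tr -d '\n')
curl -X POST "https://demo.docusign.net/restapi/v2/accounts/8aabdb38-41ab-4fab-bae9-63a071394a7a/envelopes" \
  -H 'X-DocuSign-Authentication:{"Username": "myemail@hotmail.com","Password": "mypassword", "IntegratorKey": "fae5e715-dec2-477f-906e-b6300bc9d09a"}' \
  -H "Content-Type: application/json" \
  -d '{
    "emailSubject": "Please sign this document",
    "status": "sent",
    "documents": [{"documentId": "1", "name": "myfile.pdf", "documentBase64": "'"$BASE64_DOC"'"}],
    "recipients": {"signers": [{"email": "signer@example.com", "name": "Signer Name", "recipientId": "1"}]}
  }'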
I am trying to upload a file from Linux to SharePoint with my SharePoint login credentials.
I use the cURL utility to achieve this. The upload is successful.
The command used is: curl --ntlm --user username:password --upload-file myfile.txt -k https://sharepointserver.com/sites/mysite/myfile.txt
The -k option is used to overcome certificate errors for the non-secure SharePoint site.
However, the uploaded file shows up in the "checked out" view (green arrow) in SharePoint under my login.
As a result, this file is non-existent for users from other logins.
My login has write access privileges to SharePoint.
Any ideas on how to "check in" this file to SharePoint with cURL so that the file can be viewed from anyone's login?
I don't have curl available to test right now but you might be able to fashion something out of the following information.
Check-in and check-out are handled by /_layouts/CheckIn.aspx
The page has the following querystring variables:
List - A GUID that identifies the current list.
FileName - The name of the file with extension.
Source - The full URL to the AllItems.aspx page in the library.
I was able to get the CheckIn.aspx page to load correctly just using the FileName and Source parameters and omitting the List parameter. This is good because you don't have to figure out a way to look up the List GUID.
The CheckIn.aspx page postbacks to itself with the following form parameters that control checkin:
PostBack - boolean set to true.
CheckInAction - string set to ActionCheckin
KeepCheckout - set to 1 to keep checkout and 0 to keep checked in
CheckinDescription - string of text
Call this in curl like so (the URL is quoted so the shell does not split it at the & characters):
curl --data "PostBack=true&CheckinAction=ActionCheckin&KeepCheckout=0&CheckinDescription=SomeTextForCheckIn" "http://{Your Server And Site}/_layouts/checkin.aspx?Source={Full Url To Library}/Forms/AllItems.aspx&FileName={Doc And Ext}"
As I said, I don't have curl to test, but I got this to work using the Composer tab in Fiddler 2.
I'm trying this with curl now and there is an issue getting it to work. Fiddler was executing the request as a POST. If you try to do this as a GET request you will get a 500 error saying that the AllowUnsafeUpdates property of the SPWeb will not allow this request over GET. Sending the request as a POST should correct this.
Edit: I am currently going through the CheckIn.aspx source in the dotPeek decompiler and seeing some additional options for the CheckinAction parameter that may be relevant, such as ActionCheckinPublish and ActionCheckinFromClientPublish. I will update this with any additional findings. The page class is located at Microsoft.SharePoint.ApplicationPages.Checkin for anyone else interested.
The above answer by Junx is correct. However, the FileName variable is not only the document filename and extension; it should also include the library name. I was able to get this to work using the following.
Example: http://domain/_layouts/Checkin.aspx?Filename=Shared Documents/filename.txt
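Putting the pieces together with the NTLM options from the original upload command, a complete check-in call might look roughly like this (server, site, library, and description are placeholders, and the URL is quoted and percent-encoded so the shell and server accept it as one argument):

curl --ntlm --user username:password -k \
  --data "PostBack=true&CheckinAction=ActionCheckin&KeepCheckout=0&CheckinDescription=Uploaded+via+curl" \
  "https://sharepointserver.com/sites/mysite/_layouts/CheckIn.aspx?Source=https://sharepointserver.com/sites/mysite/Shared%20Documents/Forms/AllItems.aspx&FileName=Shared%20Documents/myfile.txt"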
My question about Performing multiple requests using cURL has a pretty comprehensive example using bash and cURL, although it suffers from having to reenter the password for each request.