I have a question about how to tag the cluster of a job cluster in Databricks via the API.
I know that I can already tag a regular cluster and the job itself, but I want to tag the cluster that belongs to the job cluster. Is this possible?
I tried using the "jobs/update" endpoint to insert the tag into the job cluster, but even when sending these fields I still get the same error:
Example of request:
curl --location --request POST 'https://databricks.com/api/2.0/jobs/update' \
--header 'Authorization: Bearer token' \
--header 'Content-Type: application/json' \
--data-raw '{
  "job_id": 123456789,
  "new_settings": {
    "job_clusters": [
      {
        "job_cluster_key": "test",
        "new_cluster": {
          "custom_tags": {"test": "123"}
        }
      }
    ]
  }
}'
I want to tag the resource (cluster) within the job cluster. Is this possible via the API? Has anyone performed this action?
If you look into the Jobs API documentation, you will see that you need to provide all configuration parameters for the block that you are updating, not only the changed nested fields. Basically, this API only lets you replace top-level fields of the job in their entirety; it does not merge nested fields.
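For illustration, here is a sketch of what such a full update could look like. The spark_version, node_type_id and num_workers values below are placeholders; copy the actual cluster spec from a jobs/get call for your job and only add custom_tags on top of it:
# placeholder cluster spec: take the real values from jobs/get before sending
curl --location --request POST 'https://databricks.com/api/2.0/jobs/update' \
--header 'Authorization: Bearer token' \
--header 'Content-Type: application/json' \
--data-raw '{
  "job_id": 123456789,
  "new_settings": {
    "job_clusters": [
      {
        "job_cluster_key": "test",
        "new_cluster": {
          "spark_version": "10.4.x-scala2.12",
          "node_type_id": "Standard_DS3_v2",
          "num_workers": 2,
          "custom_tags": {"test": "123"}
        }
      }
    ]
  }
}'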
P.S. I'll ask docs team to provide clarification there.
Related
In a multi geo environment I would like to execute a SharePoint REST API search and limit it to some geo locations only as described in the microsoft multi geo documentation. I have tried the GET as well as POST requests, but all settings in my MultiGeoSearchConfiguration are ignored, I am always getting the full result list from all geo locations.
What am I doing wrong here? Is it maybe the missing sourceId, which I have no idea where to find?
curl --location -g --request GET 'https://<mydev>.sharepoint.com/_api/search/query?querytext=%27test%27&ClientType=%27cb991e32-6ce4-4e98-a91b-4eea9a874962%27&Properties=%27EnableMultiGeoSearch:true,%20MultiGeoSearchConfiguration:[{DataLocation\:%22EUR%22\,Endpoint\:%22https\://<mydev>EUR.sharepoint.com%22}]%27' --header 'Accept: application/json' --header 'Authorization: Bearer ...'
(<mydev> is exchanged for my real sharepoint of course)
In your API request you have EnableMultiGeoSearch:true. According to the documentation provided, if this parameter is set to true "the query shall be fanned out to the indexes of other geo locations of the multi-geo tenant". Have you tried setting this value to false?
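For example, the same GET request from the question with only that flag flipped to false (everything else unchanged):
curl --location -g --request GET 'https://<mydev>.sharepoint.com/_api/search/query?querytext=%27test%27&ClientType=%27cb991e32-6ce4-4e98-a91b-4eea9a874962%27&Properties=%27EnableMultiGeoSearch:false,%20MultiGeoSearchConfiguration:[{DataLocation\:%22EUR%22\,Endpoint\:%22https\://<mydev>EUR.sharepoint.com%22}]%27' --header 'Accept: application/json' --header 'Authorization: Bearer ...'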
I have tried Create, batchUpdate, get from https://developers.google.com/docs/api/how-tos/overview.
Even in batchUpdate I don't see an option to edit the title. I have used it to edit the contents of the document (insert/delete), but not the title.
How can I edit a document title given that I have document id?
Is there any way to edit document title using API?
I believe your goal is as follows.
You want to modify the title of Google Document.
Namely, you want to modify the filename of Google Document.
For this, how about this answer?
In order to modify the filename of a Google Document, you can use the "Files: update" method of the Drive API. Unfortunately, the Google Docs API cannot be used for modifying the filename of a Google Document.
Endpoint:
PATCH https://www.googleapis.com/drive/v3/files/fileId
Sample request body:
{"name":"updated title"}
Sample curl:
curl --request PATCH \
'https://www.googleapis.com/drive/v3/files/{documentId}' \
--header 'Authorization: Bearer [YOUR_ACCESS_TOKEN]' \
--header 'Accept: application/json' \
--header 'Content-Type: application/json' \
--data '{"name":"updated title"}' \
--compressed
Reference:
Files: update in Drive API v3
Added:
When you want to make the request from the browser, you can use this quickstart for authorization and Files: update for modifying the title. As a sample, I show the Files: update method for JavaScript below. Please use the authorization script from the quickstart.
Sample script:
gapi.client.drive.files.update({
  fileId: fileId,
  resource: {name: "updated title"},
}).then((response) => {
  console.log(response);
});
For example I want to receive only project names:
https://gitlab.com/api/v4/groups/:id/projects?fields=name
Is that possible?
That's not possible in the REST API. But GitLab is working on GraphQL support, and you'd be able to express that in GraphQL.
https://docs.gitlab.com/ee/api/graphql/
To iterate on Vonc & King Chung Huang's answers: using the following GraphQL query, you can get only the name field of the projects inside your group:
{
  group(fullPath: "your_group_name") {
    projects {
      nodes {
        name
      }
    }
  }
}
You can go to the following URL: https://$gitlab_url/-/graphql-explorer and paste the above query.
Using curl & jq :
gitlab_url=<your gitlab host>
access_token=<your access token>
group_name=<your group name>
curl -s -H "Authorization: Bearer $access_token" \
-H "Content-Type:application/json" \
-d '{
"query": "{ group(fullPath: \"'$group_name'\") { projects { nodes { name }}}}"
}' "https://$gitlab_url/api/graphql" | jq '.'
GitLab 11.11 (May 2019) has now introduced "basic support for group GraphQL queries":
GraphQL APIs allow users to request exactly the data they need, making it possible to get all required data in a limited number of requests.
In this release, GitLab now supports basic group information in the GraphQL API.
See Issue 60786 and documentation: "Available queries"
A first iteration of a GraphQL API includes the following queries:
project: Within a project it is also possible to fetch a mergeRequest by IID.
group: Only basic group information is currently supported.
How can I get the repo language and topics for a GitHub repo through the REST API? I looked at the documentation under https://developer.github.com/v3/, but could not find an answer.
Just found one way to do the search for topics, as follows. Of course it needs a custom media type at the moment.
From the documentation:
curl -H "Authentication: token TOKEN" \
-H "Accept: application/vnd.github.mercy-preview+json" \
https://api.github.com/search/repositories?q=topic:ruby+topic:rails
For the language, it is simply the URL like :
https://api.github.com/repos/postgres/pgadmin4/languages
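If you already know the repository, you can also list its topics directly via the repository topics endpoint; it needs the same preview media type, for example:
curl -H "Accept: application/vnd.github.mercy-preview+json" \
  https://api.github.com/repos/postgres/pgadmin4/topics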
When downloading Event Logs, is it possible to get them using the API instead of downloading them via the Download CSV button on a web browser?
Is there an API among the URLs below that makes this possible?
https://developer.yahoo.com/flurry/docs/api/code/analyticsapi/
Also, if you plan to add one in the future, please let me know when it is scheduled for completion.
I appreciate your assistance.
There is no API for getting Event Logs (raw data), as far as I know.
Workaround:
Downloading the Event Logs CSV can be done with something like this, with some additional touches. That implementation is for the previous version.
After Flurry's renovation on 3/27/2017:
Log in via GET /auth/v1/session with credentials
Get 'flurry-auth-token' from GET /auth/v1/authorize
Call GET ../eventLogCsv with 'flurry-auth-token' to download CSV
I'm a user of Flurry. And hope they support this feature via API soon.
As of the moment of writing, Flurry now provides a Raw Data Download API, so you can retrieve your raw event data on a periodic basis (but within some limitations: the time window must be less than 1 month, data preparation takes some time, etc.).
The simplified workflow is as follows:
1. Setup
First of all, you must generate a Programmatic Token (described here: https://developer.yahoo.com/flurry/docs/api/code/apptoken/; the process is straightforward, except that you'll need to create another user with a different role in order to use this token).
2. Making the Request
Specify startTime/endTime for the desired time window in the request (among other parameters):
curl -X POST https://rawdata.flurry.com/pulse/v1/rawData \
  -H 'accept: application/vnd.api+json' \
  -H 'authorization: Bearer ~~YOUR TOKEN~~' \
  -H 'cache-control: no-cache' \
  -H 'content-type: application/vnd.api+json' \
  -d '{
        "data": {
          "type": "rawData",
          "attributes": {
            "startTime": "1511164800000",
            "endTime": "1511251199000",
            "outputFormat": "JSON",
            "apiKey": "AAAA1111BBBB2222CCCC"
          }
        }
      }'
If your request was successful (requestStatus equals Acknowledged in the response body), save the id value from the response.
3. Checking for data preparation status
Depending on the complexity of your app and the requested time window, data preparation may take from about 30 minutes up to a few hours.
You can check status by using:
curl -g 'https://rawdata.flurry.com/pulse/v1/rawData/26?fields[rawData]=requestStatus,s3URI' \
  -H 'accept: application/vnd.api+json' \
  -H 'authorization: Bearer ~~YOUR TOKEN~~' \
  -H 'cache-control: no-cache' \
  -H 'content-type: application/vnd.api+json'
As soon as your data is ready, the response will look like the following:
{
  "data": {
    "type": "rawData",
    "id": "26",
    "attributes": {
      "requestStatus": "Success",
      "s3URI": "https://flurry-rdd.s3.amazonaws.com/downloads/26.JSON.gz?AWSAccessKeyId=AAAA1111BBBB2222CCCC&Expires=1513101235&Signature=h%2FChXRi5QwmvhUrkpwq2nVKf8sc%3D"
    }
  }
}
Save the s3URI for the next step.
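If you are scripting this, you can pull both fields out of that status response with jq, for example:
curl -sg 'https://rawdata.flurry.com/pulse/v1/rawData/26?fields[rawData]=requestStatus,s3URI' \
  -H 'accept: application/vnd.api+json' \
  -H 'authorization: Bearer ~~YOUR TOKEN~~' \
  | jq -r '.data.attributes.requestStatus, .data.attributes.s3URI'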
4. Retrieving Results
Now you can retrieve the archived raw data by using the s3URI (quote the URL so the shell does not interpret the & characters):
curl -O 'https://flurry-rdd.s3.amazonaws.com/downloads/26.JSON.gz?AWSAccessKeyId=AAAA1111BBBB2222CCCC&Expires=1513039053&Signature=xbKNnTgpv1odAfVgPRLMyck8UnE%3D'
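The download is a gzip archive; assuming it was saved as 26.JSON.gz, you can unpack it with:
# produces 26.JSON in the current directory (adjust the name to however you saved the file)
gunzip 26.JSON.gz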
Source: https://developer.yahoo.com/flurry/docs/analytics/rdd/