I have recently upgraded my Airflow installation from v1.10.6 to v2.2.3 (the latest version). I created a user with the role User:
airflow users create \
--role User \
--username DEVUSER \
--firstname DEV \
--lastname USER \
--email my_email@gmail.com
The password is devairflowuser.
I'm trying to trigger a new DAG run using the curl command below:
curl -X POST 'http://localhost:8083/api/v1/dags/handling_migrations_task_request/dagRuns' -H 'Content-Type: application/json' --user "DEVUSER:devairflowuser" -d '{
}'
But I get a 401 Unauthorized error:
{
"detail": null,
"status": 401,
"title": "Unauthorized",
"type": "https://airflow.apache.org/docs/apache-airflow/2.2.3/stable-rest-api-ref.html#section/Errors/Unauthenticated"
}
In airflow.cfg I have:
auth_backend = airflow.api.auth.backend.basic_auth
With Admin credentials, however, I am able to trigger a new DAG run.
From https://airflow.apache.org/docs/apache-airflow/stable/security/access-control.html#user I can see that the User role should also be able to create a DAG run:
(permissions.ACTION_CAN_CREATE, permissions.RESOURCE_DAG_RUN)
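For debugging, the same request can be issued from Python; a minimal sketch using only the standard library and the credentials above, which mirrors the Basic Authorization header that curl --user builds:

```python
import base64
import json
import urllib.request

# Values from the question; adjust to your environment.
base_url = "http://localhost:8083/api/v1"
dag_id = "handling_migrations_task_request"
user, password = "DEVUSER", "devairflowuser"

# The basic_auth backend expects a standard HTTP Basic header:
# "Basic " + base64("user:password"), which is what curl --user sends.
token = base64.b64encode(f"{user}:{password}".encode()).decode()
req = urllib.request.Request(
    f"{base_url}/dags/{dag_id}/dagRuns",
    data=json.dumps({}).encode(),  # empty conf, as in the curl call
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Basic {token}",
    },
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment to actually send the request
```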
Any help is appreciated. Thanks!
We have a container-based service running in AWS ECS, with the front end hosted by AWS CloudFront and authorization handled by AWS Cognito. I'm trying to configure Wiremock as a proxy for this service so I can record the calls and mappings for later use in unit tests for a client app I'm writing in Python.
I'm running the Wiremock server in standalone mode and have it proxying calls to the URL of our service. However, CloudFront keeps returning either a 403 Bad Request or a 403 Forbidden error when I connect via Wiremock.
When I use curl and pass all the correct headers (Content-Type: application/json, Authorization: Bearer), it works just fine when I use https://myservice.example.com/api/foo. But as soon as I swap out "myservice.example.com" for "localhost:8000", I get the CloudFront-generated errors.
I'm guessing I have some misconfiguration where, despite passing the headers to Wiremock, I haven't properly told Wiremock to pass those headers on to "the service", which is really CloudFront.
Not being a Java guy, I'm finding the Wiremock docs a little difficult to understand, and I am trying to configure Wiremock with command-line arguments like this:
/usr/bin/java -jar \
./wiremock-jre8-standalone-2.35.0.jar \
--port=8000 \
--verbose \
--root-dir=test_data/wiremock \
--enable-browser-proxying \
--preserve-host-header \
--print-all-network-traffic \
--record-mappings \
--trust-proxy-target=https://myservice.example.com \
--proxy-all=https://myservice.example.com
Request:
$ curl -k -X GET -H "Content-Type: application/json" \
-H "Authorization: Bearer ${JWT}" \
http://127.0.0.1:8000/api/foo
Response:
<html>
<head><title>403 Forbidden</title></head>
<body>
<center><h1>403 Forbidden</h1></center>
<hr><center>CloudFront</center>
</body>
</html>
When using exactly the same curl command, but changing the URL to point directly at my service instead of the proxy, I get the response I had hoped to see through the proxy:
curl -k -X GET -H "Content-Type: application/json" \
-H "Authorization: Bearer ${JWT}" \
https://myservice.example.com/api/foo
[
{
"id": "09d91ea0-7cb0-4786-b3fc-145fc88a1a3b",
"name": "foo",
"created": "2022-06-09T02:32:11Z",
"updated": "2022-06-09T20:08:43Z",
},
{
"id": "fb2b6454-4336-421a-bc2f-f1d588a78d12",
"name": "bar",
"created": "2022-10-05T06:23:24Z",
"updated": "2022-10-05T18:34:32Z",
}
]
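For what it's worth, the only difference between the working and failing calls is the host, so CloudFront is most likely rejecting the proxied request based on the Host header it receives (with --preserve-host-header, Wiremock forwards the client's localhost Host). As a hypothetical check, you can force the origin's Host header on the proxied request; a minimal sketch using only the standard library (the JWT value is a placeholder):

```python
import urllib.request

jwt = "example-token"  # placeholder for the real JWT
req = urllib.request.Request(
    "http://127.0.0.1:8000/api/foo",  # the local Wiremock proxy
    headers={
        "Content-Type": "application/json",
        "Authorization": f"Bearer {jwt}",
        # CloudFront matches requests on Host; send the origin's
        # name rather than the localhost default.
        "Host": "myservice.example.com",
    },
)
# urllib.request.urlopen(req)  # uncomment to send through the proxy
```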
Any help would be greatly appreciated.
Thanks.
We are trying to trigger an existing Azure Databricks notebook with a REST API call from a shell script. There are existing clusters running in the workspace. We want to attach the notebook to an existing cluster and trigger it.
We are trying to figure out the configuration and the REST API call that can trigger the notebook on a specific cluster dynamically at run time.
I have reproduced the above and got the results below.
Here, I have created two clusters C1 and C2 and two notebooks Nb1 and Nb2.
My Nb1 notebook code for sample:
print("Hello world")
I have created a job for Nb1 and executed it on cluster C1 using the shell script below, run from Nb2, which is attached to C2.
%sh
curl -n --header "Authorization: Bearer <Access token>" \
-X POST -H 'Content-Type: application/json' \
-d '{
"run_name": "My Notebook run",
"existing_cluster_id": "<cluster id>",
"notebook_task":
{
"notebook_path": "<Your Notebook path>"
}
}' https://<databricks-instance>/api/2.0/jobs/runs/submit
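The same runs/submit payload can also be built programmatically, which avoids shell-quoting mistakes; a minimal sketch using only the Python standard library (token, instance, cluster id, and notebook path are placeholders, exactly as in the curl call):

```python
import json
import urllib.request

token = "<Access token>"            # placeholder
instance = "<databricks-instance>"  # placeholder, e.g. your workspace host
payload = {
    "run_name": "My Notebook run",
    "existing_cluster_id": "<cluster id>",
    "notebook_task": {"notebook_path": "<Your Notebook path>"},
}
req = urllib.request.Request(
    f"https://{instance}/api/2.0/jobs/runs/submit",
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment to submit the run
```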
Execution from Nb2:
Job Created:
Job Output:
Here is my dead simple flow:
from prefect import flow
import datetime

@flow
def firstflow(inreq):
    retval = {}
    retval['type'] = str(type(retval))
    retval['datetime'] = str(datetime.datetime.now())
    print(retval)
    return retval
I run prefect orion and a prefect agent.
When I make a trigger using the web UI (deployment run), the agent successfully picks it up and does the job.
My question is: how do I do the trigger using just curl?
Note: I already read http://127.0.0.1:4200/docs, but my lame brain couldn't find how to do it.
Note:
Let's say my flow id is: 7ca8a456-94d7-4aa1-80b9-64894fdca93b
The parameters I want processed are: {'msg': 'Hello world'}
I blindly tried:
curl -X POST -H 'Content-Type: application/json' http://127.0.0.1:4200/api/flow_runs \
-d '{"flow_id": "7ca8a456-94d7-4aa1-80b9-64894fdca93b", "parameters": {"msg": "Hello World"}, "tags": ["test"]}'
but prefect orion says:
INFO: 127.0.0.1:53482 - "POST /flow_runs HTTP/1.1" 307 Temporary Redirect
Sincerely
-bino-
It's certainly possible to do it via curl, but it might be painful, especially if your flow has parameters. There's a much easier way to trigger a flow that will be tracked by the backend API: run the flow's Python script, and it will have exactly the same effect. This is because the (ephemeral) backend API of Prefect 2.0 is always active in the background, and all flow runs, even those started from a terminal, are tracked in the backend.
Regarding curl, it looks like you are missing the trailing slash after flow_runs. Changing your command to this one should work:
curl -X POST -H 'Content-Type: application/json' http://127.0.0.1:4200/api/flow_runs/ \
-d '{"flow_id": "7ca8a456-94d7-4aa1-80b9-64894fdca93b", "parameters": {"msg": "Hello World"}, "tags": ["test"]}'
The route that might be more helpful, though, is this one: it creates a flow run from a deployment and sets it into a SCHEDULED state. The default state is PENDING, which would cause the flow run to be stuck. This should work directly:
curl -X POST -H 'Content-Type: application/json' \
http://127.0.0.1:4200/api/deployments/your-uuid/create_flow_run \
-d '{"name": "curl", "state": {"type": "SCHEDULED"}}'
I am trying to disable and then re-enable a branch policy created for a branch, using the Azure DevOps REST API.
The Branch Policy that I have manually created:
Using curl, I was able to get the list of branch policies that have been created in the repository:
curl --url "https://dev.azure.com/{ORG}/{PROJ}/_apis/policy/configurations?api-version=6.0" --user "username:password" --request GET --header "Accept: application/json"
Output:
{
"count":1,
"value":[
{
"createdBy":{
"displayName":"Akshay B",
"url":"XXXX",
"_links":{
"avatar":{
"href":"XXXX"
}
},
"id":"XXXX-XXXX-XXXX-XXXX-e0fdec2c1636",
"uniqueName":"XXXX#XXXX.com",
"imageUrl":"XXXX",
"descriptor":"XXXX"
},
"createdDate":"2021-08-30T12:24:43.0238821Z",
"isEnabled":true,
"isBlocking":true,
"isDeleted":false,
"settings":{
"minimumApproverCount":2,
"creatorVoteCounts":false,
"allowDownvotes":false,
"resetOnSourcePush":false,
"requireVoteOnLastIteration":false,
"resetRejectionsOnSourcePush":false,
"blockLastPusherVote":false,
"scope":[
{
"refName":"refs/heads/master",
"matchKind":"Exact",
"repositoryId":"XXXX-XXXX-XXXX-XXXX-cd2a5c3167b3"
}
]
},
"isEnterpriseManaged":false,
"_links":{
"self":{
"href":"XXXX"
},
"policyType":{
"href":"XXXX"
}
},
"revision":1,
"id":2,
"url":"XXXX",
"type":{
"id":"XXXX-XXXX-XXXX-XXXX-4906e5d171dd",
"url":"XXXX",
"displayName":"Minimum number of reviewers"
}
}
]
}
Now I am trying to disable the policy created above using the curl command below:
curl --url "https://dev.azure.com/{ORG}/{PROJ}/_apis/policy/configurations/2?api-version=6.0" --user "username:password" --request PUT --header "Content-Type: application/json" --data '{\"isEnabled\":false}'
But I end up with the error:
{"$id":"1","innerException":null,"message":"TF400898: An Internal Error Occurred. Activity Id: xxxx-xxxx-xxxx-xxxx-70e5364888b7.","typeName":"Newtonsoft.Json.JsonReaderException, Newtonsoft.Json","typeKey":"JsonReaderException","errorCode":0,"eventId":0}
Is there anything I am missing out in the JSON data to be passed for the PUT method?
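The JsonReaderException suggests the body never parsed as JSON: inside single quotes, the shell passes the backslashes through literally, so the service receives the escaped string rather than a JSON object. A quick way to see this, using Python's json module on the exact string the shell would send (a sketch for illustration only):

```python
import json

# What the shell actually sends: single quotes preserve the backslashes.
sent = r'{\"isEnabled\":false}'
try:
    json.loads(sent)
    parsed = True
except json.JSONDecodeError:
    parsed = False  # the escaped quotes make it invalid JSON

# Dropping the backslashes gives a payload that parses cleanly.
valid = '{"isEnabled":false}'
assert json.loads(valid) == {"isEnabled": False}
```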
There are many branch policies (reviewers, builds, etc.), and each policy behaves differently.
For the reviewers policy you can use the DELETE API:
https://dev.azure.com/{org}/{project}/_apis/policy/Configurations/{policy-id}?api-version=6.0
In curl, --request should be DELETE.
You can get the policy id from the GET call you already made.
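In Python, the same DELETE call could look like this; a minimal sketch using only the standard library (org, project, and policy id are placeholders, and the password is typically a personal access token):

```python
import base64
import urllib.request

org, project, policy_id = "{ORG}", "{PROJ}", 2
user, password = "username", "password"  # typically username + PAT
token = base64.b64encode(f"{user}:{password}".encode()).decode()
req = urllib.request.Request(
    f"https://dev.azure.com/{org}/{project}/_apis/policy/configurations/"
    f"{policy_id}?api-version=6.0",
    headers={"Authorization": f"Basic {token}"},
    method="DELETE",
)
# urllib.request.urlopen(req)  # uncomment to delete the policy
```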
I am trying to set up oauth2-proxy to authenticate against Microsoft's German Azure cloud. It has been quite a ride, but I got as far as completing the OAuth handshake. However, I get an error when trying to retrieve the user's mail and name via the Graph API.
I run the proxy within docker like this:
docker run -it -p 8081:8081 \
--name oauth2-proxy --rm \
bitnami/oauth2-proxy:latest \
--upstream=http://localhost:8080 \
--provider=azure \
--email-domain=homefully.de \
--cookie-secret=super-secret-cookie \
--client-id=$CLIENT_ID \
--client-secret="$CLIENT_SECRET" \
--http-address="0.0.0.0:8081" \
--redirect-url="http://localhost:8081/oauth2/callback" \
--login-url="https://login.microsoftonline.de/common/oauth2/authorize" \
--redeem-url="https://login.microsoftonline.de/common/oauth2/token" \
--resource="https://graph.microsoft.de" \
--profile-url="https://graph.microsoft.de/me"
Right now it stumbles on the profile URL (which is used to retrieve the identity of the user logging in).
The log output is this:
2019/01/28 09:24:51 api.go:21: 400 GET https://graph.microsoft.de/me {
"error": {
"code": "BadRequest",
"message": "Invalid request.",
"innerError": {
"request-id": "1e55a321-87c2-4b85-96db-e80b2a5af1a3",
"date": "2019-01-28T09:24:51"
}
}
}
I would really appreciate suggestions about what I am doing wrong here. So far the documentation has not been very helpful to me. Things seem to be slightly different in the German Azure cloud, but documentation on that is pretty thin. The fact that the Azure docs only describe the US cloud, where all the URLs are different (and not in a predictable way, unfortunately), makes things a lot harder.
Best,
Matthias
The issue was that the profile URL https://graph.microsoft.de/me was incorrect.
While https://graph.microsoft.com/me is valid for the US cloud, the german cloud requires the version embedded in the URL like this:
https://graph.microsoft.de/v1.0/me.
This worked for me:
docker run -it -p 8081:8081 \
--name oauth2-proxy --rm \
bitnami/oauth2-proxy:latest \
--upstream=http://localhost:8080 \
--provider=azure \
--email-domain=homefully.de \
--cookie-secret=super-secret-cookie \
--client-id=$CLIENT_ID \
--client-secret="$CLIENT_SECRET" \
--http-address="0.0.0.0:8081" \
--redirect-url="http://localhost:8081/oauth2/callback" \
--login-url="https://login.microsoftonline.de/common/oauth2/authorize" \
--redeem-url="https://login.microsoftonline.de/common/oauth2/token" \
--resource="https://graph.microsoft.de" \
--profile-url="https://graph.microsoft.de/v1.0/me"