I modified the simple example 'favorite_number.py':
from prefect import flow

@flow
def my_favorite_function(myvar: str = '1'):
    print(f'\n\nLOCALS: {locals()}\n\n')
On the agent side, the output is:
Agent started! Looking for work from queue 'Agent queue semua'...
12:51:44.174 | INFO | prefect.agent - Submitting flow run '4f147b53-2db8-4762-9f51-f0031a8f7eb1'
12:51:44.257 | INFO | prefect.infrastructure.process - Opening process 'stalwart-sparrow'...
12:51:44.261 | INFO | prefect.agent - Completed submission of flow run '4f147b53-2db8-4762-9f51-f0031a8f7eb1'
12:51:47.574 | INFO | Flow run 'stalwart-sparrow' - Finished in state Completed()
LOCALS: {'myvar': '1'}
12:51:48.012 | INFO | prefect.infrastructure.process - Process 'stalwart-sparrow' exited cleanly.
I found that '4f147b53-2db8-4762-9f51-f0031a8f7eb1' is the flow_run id.
My question is: how can I make my_favorite_function print its flow_run_id?
I actually need that flow_run_id for another process outside Prefect.
Sincerely
-bino-
Here is how you can get your favorite function to print the flow run ID in Prefect 2:
import prefect
from prefect import flow

@flow
def my_favorite_function():
    run_id = prefect.context.get_run_context().flow_run.id
    print(run_id)

if __name__ == "__main__":
    my_favorite_function()
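Since the flow_run_id is needed by a process outside Prefect, one option is to hand it over from inside the flow itself. A minimal sketch, assuming a hypothetical command-line consumer called my_external_tool:

import subprocess

from prefect import flow
from prefect.context import get_run_context

@flow
def my_favorite_function():
    run_id = str(get_run_context().flow_run.id)
    # Hand the flow run ID to a process outside Prefect;
    # 'my_external_tool' is a hypothetical placeholder
    subprocess.run(["my_external_tool", "--flow-run-id", run_id], check=True)

if __name__ == "__main__":
    my_favorite_function()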
I need to check whether all the tasks of my DAG were marked as successful, so that the last task of the DAG can send me an email notifying me whether all succeeded or any failed.
Here is a piece of code that I tried:
dag_runs = DagRun.find(dag_id=self.dagId)
for dag_run in dag_runs:
    if dag_run.state == 'success':
        body = f'\nHello,\nHere are the values for the pipeline {self.dagId}:\ncount of lines is {new_lines},\nMax date is {new_date}.\nRegards!'
    else:
        body = f'\nHello,\nYour dag {self.dagId} has failed'

email_text = """\
Subject: %s
From: %s
To: %s

%s
""" % (subject, sent_from, self.to, body)

try:
    smtp_server = smtplib.SMTP_SSL('smtp.gmail.com', 465)
    smtp_server.ehlo()
    smtp_server.login(self.gmail_user, self.gmail_password)
    smtp_server.sendmail(sent_from, self.to, email_text)
    smtp_server.close()
    print("Email sent successfully!")
except Exception as ex:
    print("Something went wrong...", ex)
I'm unable to check whether the DAG state is success, so I want to check whether the state of all tasks is success.
Thanks for the help and advice in advance.
We had a similar use case where we wanted to identify whether all the tasks were successful. In Airflow, if a task fails and there is a trigger_rule of one_failed, the DAG run can end up being marked as successful because there was a recovery from the failure.
The solution we implemented, with a single email to track all the task instances:
from airflow.models.dagrun import DagRun
from airflow.models.taskinstance import TaskInstance
from airflow.operators.python import PythonOperator

def check_all_success(**context):
    dr: DagRun = context["dag_run"]
    ti: TaskInstance = context["ti"]
    # Collect the states of all task instances, excluding the task
    # currently executing this logic
    ti_summary = set(task.state for task in dr.get_task_instances() if task.task_id != ti.task_id)
    # Remove the success state (discard avoids a KeyError when no task succeeded)
    ti_summary.discard('success')
    # If the summary still contains any state other than success,
    # there was an issue in the run
    if ti_summary:
        # Send email: All tasks in DAG: {dr.dag_id} did not complete successfully
        pass
    else:
        # Send email: All tasks in DAG: {dr.dag_id} completed successfully
        pass

check_all_tasks = PythonOperator(
    task_id='check_all_tasks',
    python_callable=check_all_success,
    provide_context=True
)
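One caveat worth noting: with Airflow's default trigger rule (all_success), a task like check_all_tasks is skipped as soon as any upstream task fails, so the check would never run. A minimal sketch of the likely intended wiring; the trigger_rule below is an assumption, not part of the original answer:

# Assumption: the check should run even when upstream tasks fail
check_all_tasks = PythonOperator(
    task_id='check_all_tasks',
    python_callable=check_all_success,
    provide_context=True,
    trigger_rule='all_done',  # run once all upstream tasks finish, whatever their state
)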
By default, every task in Airflow must succeed for the next task to start running. So if your email task is the last task in your DAG, reaching it automatically means all previous tasks have succeeded.
Alternatively, you could configure on_success_callback and on_failure_callback on your DAG, each of which executes a given callable. Arguments are passed in to determine whether the DAG run failed or succeeded:
def email(dagrun: DagRun, success: bool, reason: str, session: Session):
    # send email here...
    pass
success is a boolean value which indicates DAG Run success/failure.
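A minimal sketch of wiring these callbacks into a DAG; the DAG id is hypothetical, and the callbacks accept flexible arguments because the exact callback signature varies across Airflow versions:

from datetime import datetime

from airflow import DAG

def notify_success(*args, **kwargs):
    # send the "all tasks succeeded" email here...
    pass

def notify_failure(*args, **kwargs):
    # send the "DAG run failed" email here...
    pass

# 'my_pipeline' is a hypothetical DAG id
dag = DAG(
    dag_id='my_pipeline',
    start_date=datetime(2021, 1, 1),
    schedule_interval=None,
    on_success_callback=notify_success,
    on_failure_callback=notify_failure,
)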
I need a fixture in my behave code so that all the users I create during testing automatically get cleaned up. As a result, I added the following code:
# test/features/steps/environment.py
from behave import fixture, use_fixture

@fixture()
def user_cleanup(context):
    # -- SETUP-FIXTURE PART:
    context.users_to_be_cleaned_up = []
    print("Creating Fixture")
    yield context.users_to_be_cleaned_up
    # -- CLEANUP-FIXTURE PART:
    for userid in context.users_to_be_cleaned_up:
        resp = delete_database_entry("users", userid)
        print(resp)
    context.users_to_be_cleaned_up = []

def before_feature(context, feature):
    if "fixture.user.cleanup" in feature.tags:
        use_fixture(user_cleanup, context)
In my feature file, I added the following:
@fixture.user.cleanup
Feature: Validating backend from the app side

  Scenario Outline: Super Admin has permission to create other users
    Given a set of existing users:
      | user       | details     |
      | superadmin | userdetails |
    When "superadmin" successfully logs in
    Then he can create non-existing "<user>" with "<details>"
    And "<user>" can login successfully with "<details>"

    Examples: User Roles
      | user         | details      |
      | superadmin_1 | user details |
The idea was to have the tests append all created users to context.users_to_be_cleaned_up. But in the test, when I try to append, it says that the property users_to_be_cleaned_up is not present on context.
Any idea what I am doing wrong here?
I got an answer for this and am recording it here for posterity.
You need to keep your environment.py at the feature level, not at the steps level.
So the structure, as it stands today:

|
|-- test.feature
|-- environment.py
|-- steps
    |
    |-- steps.py
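For completeness, a minimal sketch of a step that registers a created user for cleanup; the step text matches the feature file above, while create_database_entry is a hypothetical helper:

# test/features/steps/steps.py
from behave import then

@then('he can create non-existing "{user}" with "{details}"')
def step_create_user(context, user, details):
    # create_database_entry is a hypothetical helper
    userid = create_database_entry("users", user, details)
    # Register the user so the user_cleanup fixture deletes it afterwards
    context.users_to_be_cleaned_up.append(userid)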
I have been trying to set up alerts for a .NET Core App Service hosted in Azure, to fire an event if X% of the requests failed in the past 24 hours. I have also tried setting up an alert from the service's App Insights resource using the following metrics: Exception rate, Server exceptions, or Failed requests.
However, none of these can capture a percentage (a failure rate); all of them use a count as the metric.
Does anyone know a workaround for this?
Please try the query-based alert:
1. Go to Application Insights Analytics and, in the query editor, enter the script below:
exceptions
| where timestamp > ago(24h)
| summarize exceptionsCount = sum(itemCount)
| extend t = ""
| join (
    requests
    | where timestamp > ago(24h)
    | summarize requestsCount = sum(itemCount)
    | extend t = ""
) on t
| project isFail = 1.0 * exceptionsCount / requestsCount > 0.5 // if the fail rate is greater than 50%, fail
| project rr = iff(isFail, "Fail", "Pass")
| where rr == "Fail"
2. Then click "New alert rule" in the upper-right corner.
3. On the Create rule page, configure the alert condition, action group, and alert details as needed.
I was looking for a way to avoid writing queries by using something already built into App Insights, but in the end I came up with a solution similar to yours, using requests instead:
requests
| summarize count()
| extend a = "a"
| join (
    requests
    | summarize count() by resultCode
    | extend a = "a"
) on a
| extend percentage = (todouble(count_1) * 100 / todouble(count_))
| where resultCode == "200" // resultCode is a string in App Insights
| where percentage < 90 // percentage of success is less than 90%
| project percentage_of_failures = round(100 - percentage, 2), total_successful_req = count_1, total_failing_req = count_ - count_1, total_req = count_
I'd like to use Azure Log Analytics to create a monitoring alert for possible brute-force attempts on my users' accounts. That is to say, I'd like to be notified by Azure (or, at the very least, be able to manually run the script to obtain the data) when a user's account is successfully authenticated into O365 following a number of failed attempts.
I know how to parse the logs to, for example, obtain the number of unsuccessful sign-in attempts by all users during a defined period (see the example below):
SigninLogs
| where TimeGenerated between(datetime("2018-11-19 00:00:00") .. datetime("2018-11-19 23:59:59"))
| where ResultType == "50074"
| summarize FailedSigninCount = count() by UserDisplayName
| sort by FailedSigninCount desc
But I don't know how to script the following: a user has made 9 unsuccessful sign-in attempts (type 50074) followed by a successful sign-in attempt, all within a 60-second period.
Any help would be gratefully received.
Check out the Azure Sentinel community GitHub and see if the queries there help. Specifically, I added https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/SigninBruteForce-AzurePortal.txt, which I think more or less does what you are after; it is also pasted below. Hope that helps.
// Evidence of Azure Portal brute force attack in SigninLogs:
// This query returns results if there are more than 5 authentication failures and a successful authentication
// within a 20-minute window.
let failureCountThreshold = 5;
let successCountThreshold = 1;
let timeRange = ago(1d);
let authenticationWindow = 20m;
SigninLogs
| where TimeGenerated >= timeRange
| extend OS = DeviceDetail.operatingSystem, Browser = DeviceDetail.browser
| extend StatusCode = tostring(Status.errorCode), StatusDetails = tostring(Status.additionalDetails)
| extend State = tostring(LocationDetails.state), City = tostring(LocationDetails.city)
| where AppDisplayName contains "Azure Portal"
// Split out failure versus non-failure types
| extend FailureOrSuccess = iff(ResultType in ("0", "50125", "50140"), "Success", "Failure")
| summarize StartTimeUtc = min(TimeGenerated), EndTimeUtc = max(TimeGenerated),
makeset(IPAddress), makeset(OS), makeset(Browser), makeset(City), makeset(ResultType),
FailureCount=countif(FailureOrSuccess=="Failure"),
SuccessCount = countif(FailureOrSuccess=="Success")
by bin(TimeGenerated, authenticationWindow), UserDisplayName, UserPrincipalName, AppDisplayName
| where FailureCount>=failureCountThreshold and SuccessCount>=successCountThreshold
I'm trying to write a Python script to talk to my instance of Jenkins. I am using the newest version of the jenkinsapi module and querying Jenkins 1.509.3.
I can get a job list as follows:
l = j.get_jobs_list()
where j is an instance of jenkinsapi.Jenkins (I used the requester from jenkinsapi.utils.requester to skip SSL verification).
However, when I try to get more information on an individual job with
j.get_job(l[0])
it fails with this error: Inappropriate content found at [some_address], and what is returned is a bunch of HTML (which looks like the start page of my instance, the one you see when you log in) instead of anything that looks like the expected response. Pasting [some_address] into the browser gives me the response I expect.
While I can get some information on the Jenkins instance, what I am really interested in is info on individual jobs. Any ideas how to fix it and get the job info?
Using Python 3.6, python-jenkins 1.0.1, and Jenkins 2.121.1, the following works nicely:
from pprint import pprint

import jenkins

IP = 'localhost'
USERNAME = 'my_username'
PW = 'my_password'

def get_version(server):
    user = server.get_whoami()
    version = server.get_version()
    print('Hello %s from Jenkins %s' % (user['fullName'], version))

def get_jobs(server):
    jobs = server.get_jobs()  # List[dict]
    print("Here are the top 5 jobs")
    pprint(jobs[:5])
    return jobs

def get_job(server, job_name):
    job_config = server.get_job_config(job_name)  # XML
    job_info = server.get_job_info(job_name)  # dict
    print("\n --- JOB CONFIG --- ")
    print(job_config)
    print("\n --- JOB INFO --- ")
    pprint(job_info)

if __name__ == "__main__":
    _server = jenkins.Jenkins(IP, username=USERNAME, password=PW)
    get_version(_server)
    _jobs = get_jobs(_server)
    get_job(_server, _jobs[0]['name'])
The python-jenkins API I used is documented here: https://python-jenkins.readthedocs.io/en/latest/index.html
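As a usage note, if your Jenkins instance is not on localhost you can pass the full URL instead of a bare host; the URL below is a hypothetical placeholder, and a Jenkins API token can stand in for the password:

import jenkins

# 'http://jenkins.example.com:8080' is a hypothetical URL
server = jenkins.Jenkins(
    'http://jenkins.example.com:8080',
    username='my_username',
    password='my_api_token',  # an API token works in place of a password
)
print(server.get_whoami())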