Fixture in behave is not getting applied - python-3.x

I need a fixture in my behave code so that all the users I create during testing automatically get cleaned up. To that end, I added the following code:
# test/features/steps/environment.py
from behave import fixture, use_fixture

@fixture()
def user_cleanup(context):
    # -- SETUP-FIXTURE PART:
    context.users_to_be_cleaned_up = []
    print("Creating Fixture")
    yield context.users_to_be_cleaned_up
    # -- CLEANUP-FIXTURE PART:
    for userid in context.users_to_be_cleaned_up:
        resp = delete_database_entry("users", userid)
        print(resp)
    context.users_to_be_cleaned_up = []

def before_feature(context, feature):
    if "fixture.user.cleanup" in feature.tags:
        use_fixture(user_cleanup, context)
In my feature file, I added the following:
@fixture.user.cleanup
Feature: Validating backend from the app side

  Scenario Outline: Super Admin has permission to create other users
    Given a set of existing users:
      | user       | details     |
      | superadmin | userdetails |
    When "superadmin" successfully logs in
    Then he can create non-existing "<user>" with "<details>"
    And "<user>" can login successfully with "<details>"

    Examples: User Roles
      | user         | details      |
      | superadmin_1 | user details |
The idea was to have the test steps append every user they create to context.users_to_be_cleaned_up. But when I try to append in a step, I get an error saying that the attribute users_to_be_cleaned_up is not present on context.
Any idea what I am doing wrong here?

I got an answer for this and am recording it here for posterity.
You need to keep your environment.py at the feature level, not at the steps level.
So the structure, as it stands today, is:
|
|-- test.feature
|-- environment.py
|-- steps
    |
    |-- steps.py
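For completeness, here is a minimal sketch of a step that registers a freshly created user for cleanup once environment.py sits at the feature level and the fixture is picked up. The create_user helper is hypothetical; substitute whatever your step actually does to create the user.
# test/steps/steps.py (illustrative)
from behave import then

@then('he can create non-existing "{user}" with "{details}"')
def step_create_user(context, user, details):
    userid = create_user(user, details)  # hypothetical helper returning the new user's id
    # Register the user so the fixture's cleanup part deletes it after the feature.
    context.users_to_be_cleaned_up.append(userid)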

Related

How to get details of per function app in Application Insights

I'm running a Python v3 function app that contains multiple functions with different bindings (Cosmos, Blob, HTTP, etc.). I'm trying to get details of this function app in Application Insights, such as the number of requests, exceptions raised during execution, and the number of requests per function app and per function.
I'm able to run queries and get a few details, like the request count. Now I'm trying to join the request details with other tables, such as exceptions, but I'm not able to drill down to a particular function.
For example, suppose I have 10 functions in the function app and they run one after another, each based on the output of the previous function. Say the flow fails at some function. I want to know at which step/function the function app failed, the details of the error, and the successful and unsuccessful flow completions of the function app.
Below are some of the queries I have used for monitoring purposes.
Requests on the first function, to get the total request count for the function app:
requests
| where timestamp > ago(1d)
| where operation_Name =~ "function name"
| summarize RequestsCount=sum(itemCount) by cloud_RoleName,bin(timestamp,1d)
Request count and average duration per function:
requests
| summarize RequestsCount=sum(itemCount), AverageDuration=avg(duration) by operation_Name
| order by RequestsCount desc
You can check the exceptions per function with:
exceptions
| extend OperationName = iff(operation_Name == "","[No operation name]",operation_Name)
| summarize Count = count() by cloud_RoleName, OperationName, type, method
To join with requests:
requests
| where timestamp > ago(24h) and success == false
| join kind= inner (
exceptions
| where timestamp > ago(24h)
) on operation_Id
| project exceptionType = type, failedMethod = method, requestName = name, requestDuration = duration, success
Keep in mind that if you catch an error yourself, the result of the function will still be a success.
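As a purely illustrative sketch (the do_work helper and the HTTP binding are assumptions, not part of the original answer), re-raising after logging keeps the request record marked as failed:
import logging
import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    try:
        return func.HttpResponse(do_work())  # do_work is a hypothetical helper
    except Exception:
        logging.exception("step failed")     # shows up in the traces/exceptions tables
        raise                                # re-raise so the invocation (and its request record) is marked as failed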
You could also work with custom error logs in your functions, where you create a JSON object that ends up in the message column of the traces table. You can then query it further:
traces
| where message contains "the error i am searching for"
| extend json = parse_json(message)
| project
timestamp,
errorSource = json.error_source,
step = json.step,
errors = json.errors,
url = json.url
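As an illustration (not part of the original answer), such a structured message can be emitted from a Python function with the standard logging module, which Azure Functions forwards to the traces table; the field names below mirror the ones used in the query above, and the error_source value is made up:
import json
import logging

def log_step_error(step, errors, url):
    # The serialized JSON string lands in the `message` column of the traces table.
    logging.error(json.dumps({
        "error_source": "my-function-app",  # illustrative value
        "step": step,
        "errors": errors,
        "url": url,
    }))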

Apache Beam / Dataflow pub/sub side input with python

I'm new to Apache Beam, so I'm struggling a bit with the following scenario:
Pub/Sub topic using Stream mode
Transform to take out customerId
Parallel PCollection with Transform/ParDo that fetches data from Firestore based on the "customerId" received in the Pub/Sub Topic (using Side Input)
...
The ParDo transform that tries to fetch the Firestore data does not run at all. If I use a fixed "customerId" value, everything works as expected ... and likewise, when I don't do a proper fetch from Firestore (just a simple ParDo), it works. Am I doing something that is not supposed to work?
Including my code below:
class getFirestoreUsers(beam.DoFn):
    def process(self, element, customerId):
        print(f'Getting Users from Firestore, ID: {customerId}')
        # Call function to initialize Database
        db = intializeFirebase()
        """ # get customer information from the database
        doc = db.document(f'Customers/{customerId}').get()
        customer = doc.to_dict() """
        usersList = {}
        # Get Optin Users
        try:
            docs = db.collection(
                f'Customers/{customerId}/DevicesWiFi_v3').where(u'optIn', u'==', True).stream()
            usersList = {user.id: user.to_dict() for user in docs}
        except Exception as err:
            print(f"Error: couldn't retrieve OPTIN users from DevicesWiFi")
            print(err)
        return([usersList])
Main code
def run(argv=None):
    """Build and run the pipeline."""
    parser = argparse.ArgumentParser()
    parser.add_argument(
        '--topic',
        type=str,
        help='Pub/Sub topic to read from')
    parser.add_argument(
        '--output',
        help=('Output local filename'))
    args, pipeline_args = parser.parse_known_args(argv)
    options = PipelineOptions(pipeline_args)
    options.view_as(SetupOptions).save_main_session = True
    options.view_as(StandardOptions).streaming = True

    p = beam.Pipeline(options=options)

    users = (p | 'Create chars' >> beam.Create([
        {
            "clientMac": "7c:d9:5c:b8:6f:38",
            "username": "Louis"
        },
        {
            "clientMac": "48:fd:8e:b0:6f:38",
            "username": "Paul"
        }
    ]))

    # Get Dictionary from Pub/Sub
    data = (p | 'Read from PubSub' >> beam.io.ReadFromPubSub(topic=args.topic)
              | 'Parse JSON to Dict' >> beam.Map(lambda e: json.loads(e))
            )

    # Get customerId from Pub/Sub information
    PcustomerId = (data | 'get customerId from Firestore' >>
                   beam.ParDo(lambda x: [x.get('customerId')]))
    PcustomerId | 'print customerId' >> beam.Map(print)

    # Get Users from Firestore
    custUsers = (users | 'Read from Firestore' >> beam.ParDo(
        getFirestoreUsers(), customerId=beam.pvalue.AsSingleton(PcustomerId)))
    custUsers | 'print Users from Firestore' >> beam.Map(print)
In order to avoid errors when running the function, I had to initialise the "users" collection, which I completely ignore afterwards.
I suppose I have several errors here, so your help is much appreciated.
It's not clear to me how the users PCollection is used in the example code (since element is not processed in the process definition). I've rearranged the code a little, adding windowing and using the customer_id as the main input.
class GetFirestoreUsers(beam.DoFn):
    def setup(self):
        # Call function to initialize Database
        self.db = intializeFirebase()

    def process(self, element):
        print(f'Getting Users from Firestore, ID: {element}')
        """ # get customer information from the database
        doc = self.db.document(f'Customers/{element}').get()
        customer = doc.to_dict() """
        usersList = {}
        # Get Optin Users
        try:
            docs = self.db.collection(
                f'Customers/{element}/DevicesWiFi_v3').where(u'optIn', u'==', True).stream()
            usersList = {user.id: user.to_dict() for user in docs}
        except Exception as err:
            print(f"Error: couldn't retrieve OPTIN users from DevicesWiFi")
            print(err)
        return([usersList])


data = (p | 'Read from PubSub' >> beam.io.ReadFromPubSub(topic=args.topic)
          | beam.WindowInto(window.FixedWindows(60))
          | 'Parse JSON to Dict' >> beam.Map(lambda e: json.loads(e)))

# Get customerId from Pub/Sub information
customer_id = (data | 'get customerId from Firestore' >>
               beam.Map(lambda x: x.get('customerId')))
customer_id | 'print customerId' >> beam.Map(print)

# Get Users from Firestore
custUsers = (customer_id | 'Read from Firestore' >> beam.ParDo(
    GetFirestoreUsers()))
custUsers | 'print Users from Firestore' >> beam.Map(print)
From your comment:
the data needed (customerID first and customers data after) is not ready when running the "main" PCollection with original JSON data from Pub/Sub
Did you mean the data in Firestore is not ready when reading the Pub/Sub topic?
You can always split the logic into 2 pipelines in your main function and run them one after another.
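For what it's worth, a rough sketch of that two-pipeline idea (illustrative only: the stand-in source, the temporary file path, and the reuse of the GetFirestoreUsers class above are assumptions, and it presumes batch-style sources, since a streaming Pub/Sub read never finishes on its own):
import json
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions()

# Pipeline 1: extract the customer ids and persist them somewhere durable.
with beam.Pipeline(options=options) as p1:
    (p1
     | 'Read messages' >> beam.Create(['{"customerId": "customer-123"}'])  # stand-in source
     | 'Extract id' >> beam.Map(lambda e: json.loads(e).get('customerId'))
     | 'Write ids' >> beam.io.WriteToText('/tmp/customer_ids'))

# Pipeline 2: read the ids back and do the Firestore lookups.
with beam.Pipeline(options=options) as p2:
    (p2
     | 'Read ids back' >> beam.io.ReadFromText('/tmp/customer_ids*')
     | 'Fetch users' >> beam.ParDo(GetFirestoreUsers())
     | 'Print' >> beam.Map(print))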

Azure Web App Service trigger alert if X% of the requests fail

I have been trying to set up alerts for a .NET Core App Service hosted in Azure that fire if X% of the requests have failed in the past 24 hours. I have also tried setting up an alert from the service's Application Insights resource using the following metrics: Exception rate, Server exceptions, or Failed requests.
However, none of these can capture a percentage (failure rate); all of them use a count as the metric.
Does anyone know a workaround for this?
Please try a query-based alert:
1. Go to Application Insights Analytics and, in the query editor, enter the script below:
exceptions
| where timestamp > ago(24h)
| summarize exceptionsCount = sum(itemCount)
| extend t = ""
| join (
    requests
    | where timestamp > ago(24h)
    | summarize requestsCount = sum(itemCount)
    | extend t = ""
) on t
| project isFail = 1.0 * exceptionsCount / requestsCount > 0.5 // if the fail rate is greater than 50%, fail
| project rr = iff(isFail, "Fail", "Pass")
| where rr == "Fail"
2. Then click "New alert rule" in the upper-right corner.
3. On the Create rule page, configure the condition, action group, and alert details as needed.
I was looking for a way to avoid writing queries, using something already built into App Insights, but in the end I came up with a solution similar to yours, using the requests table instead:
requests
| summarize count()
| extend a = "a"
| join (
    requests
    | summarize count() by resultCode
    | extend a = "a"
) on a
| extend percentage = (todouble(count_1) * 100 / todouble(count_))
| where resultCode == 200
| where percentage < 90 // percentage of success is less than 90%
| project percentage_of_failures = round(100 - percentage, 2), total_successful_req = count_, total_failing_req = count_ - count_1, total_req = count_1

Azure log analytics: monitoring successful sign-ins following repeated sign-in failures

I'd like to use Azure Log Analytics to create a monitoring alert for possible brute-force attempts on my users' accounts. That is to say, I'd like to be notified by Azure (or, at the very least, be able to manually run the script to obtain the data) when a user's account is successfully authenticated into O365 following a number of failed attempts.
I know how to parse the logs to, for example, obtain the number of unsuccessful sign-in attempts by all users during a defined period (see the example below):
SigninLogs
| where TimeGenerated between(datetime("2018-11-19 00:00:00") .. datetime("2018-11-19 23:59:59"))
| where ResultType == "50074"
| summarize FailedSigninCount = count() by UserDisplayName
| sort by FailedSigninCount desc
But I don't know how to script the following: a user makes 9 unsuccessful sign-in attempts (type 50074) followed by a successful sign-in attempt, all within a 60-second period.
Any help would be gratefully received.
Check out the Azure Sentinel community GitHub and see if the queries there help. Specifically, I added https://github.com/Azure/Azure-Sentinel/blob/master/Detections/SigninLogs/SigninBruteForce-AzurePortal.txt, which I think more or less does what you are after; it is also pasted below. Hope that helps.
// Evidence of Azure Portal brute force attack in SigninLogs:
// This query returns results if there are more than 5 authentication failures and a successful authentication
// within a 20-minute window.
let failureCountThreshold = 5;
let successCountThreshold = 1;
let timeRange = ago(1d);
let authenticationWindow = 20m;
SigninLogs
| where TimeGenerated >= timeRange
| extend OS = DeviceDetail.operatingSystem, Browser = DeviceDetail.browser
| extend StatusCode = tostring(Status.errorCode), StatusDetails = tostring(Status.additionalDetails)
| extend State = tostring(LocationDetails.state), City = tostring(LocationDetails.city)
| where AppDisplayName contains "Azure Portal"
// Split out failure versus non-failure types
| extend FailureOrSuccess = iff(ResultType in ("0", "50125", "50140"), "Success", "Failure")
| summarize StartTimeUtc = min(TimeGenerated), EndTimeUtc = max(TimeGenerated),
makeset(IPAddress), makeset(OS), makeset(Browser), makeset(City), makeset(ResultType),
FailureCount=countif(FailureOrSuccess=="Failure"),
SuccessCount = countif(FailureOrSuccess=="Success")
by bin(TimeGenerated, authenticationWindow), UserDisplayName, UserPrincipalName, AppDisplayName
| where FailureCount>=failureCountThreshold and SuccessCount>=successCountThreshold

How can I search and return the values and pass it to the method from spock table

I'm currently implementing Geb, Spock, and Groovy. I have come across the following scenario:
There is a set of data in a Spock table. I have to pass the module name as a parameter, search the Spock table, and then return two values: the user id and the password. The code below is skeleton code.
My questions are: how do I search for the module name based on the parameter, and how do I return the two values?
class Password_Collection extends Specification {
    def "Secure password for search and Data Driven"(String ModuleName) {
        expect:
        // Search based on modulename in where
        // pick the values and return the picked data

        where:
        Module              | User_Name  | Pass_word
        login_Pass          | cqauthor1  | SGVsbG8gV29ybGQ=
        AuthorPageTest_Pass | cqauthor2  | DOIaRTd35f3y4De=
        PublisherPage_pass  | cqaauthor3 | iFK95JKasdfdO5==
    }
}
It would be a great help for me to learn and implement this if you could provide the code.
You don't need to search the table yourself or pick the data; Spock will do that automatically for you.
In the expect: block, just write your unit test using Module, User_Name and Pass_word. Spock will automatically run the test 3 times (once for each row of the table), passing each row in turn to your test.
Remove the argument ModuleName from the test method. It is not needed.
I suggest you read the Spock documentation on Data Driven tests a bit more.
class YourSpec extends Specification {
    def "Secure password for search and Data Driven"(Module, User_Name, Pass_word) {
        expect:
        classUnderTest.getUserNameForModule(Module) == User_Name
        classUnderTest.getPasswordForModule(Module) == Pass_word

        where:
        Module                | User_Name    | Pass_word
        'login_Pass'          | 'cqauthor1'  | 'SGVsbG8gV29ybGQ='
        'AuthorPageTest_Pass' | 'cqauthor2'  | 'DOIaRTd35f3y4De='
        'PublisherPage_pass'  | 'cqaauthor3' | 'iFK95JKasdfdO5=='
    }
}
What Spock will do is run your test once for each row in the data table from the "where" block, passing Module, User_Name and Pass_word as parameters, and assert your expectations in the "expect" block.
Please refer to Spock Data Driven Testing documentation for more details.
