Gatling: exclude preparation request from reports

Here is my test scenario on Gatling:
val createTemplatesScenario = scenario("Template creation")
.feed(userFeeder)
.exec(doLogin) // populates access token in the session
.exec(doListProviders)
.exec(doCreateTemplate)
...
.exec(doDeleteTemplate)
And I want to exclude the Login request from the reports, because it can sometimes take too long on our system and skews all the metrics.
Is there a way to "prepare" the test scenario so that only the necessary actions are taken into account?

You can use groups of requests.
Add all your requests except Login to a group.
As mentioned in the documentation:
If your scenario contains groups, this panel becomes a tree: each group is a non-leaf node, and each request is a descendant leaf of a group. Group timings are by default the cumulated response times of all elements inside the group. Group duration can be displayed instead of group cumulated response time by editing the gatling.conf file.
The Global Information node will still be affected by the Login request, but the node for your group of all other requests should contain unaffected aggregate results.
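For instance, the scenario from the question could be wrapped like this (a minimal sketch; the group name is arbitrary):
val createTemplatesScenario = scenario("Template creation")
  .feed(userFeeder)
  .exec(doLogin) // stays outside the group, so its timings don't pollute the group's metrics
  .group("Template operations") {
    exec(doListProviders)
      .exec(doCreateTemplate)
      .exec(doDeleteTemplate)
  }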

Related

Share Variables Within JMeter Thread Group

I have a thread group, which runs multiple threads concurrently.
Each thread makes a request using an ID from a csv file.
So different threads within the thread group can end up making a request with the same ID over time.
I want to use the cookie that is returned for the specific ID in the request, even though it's made by a different thread.
At the moment I have a Regular Expression Extractor pulling the cookie value, which creates a variable based on its ID; for example, where the ID is 56789 and the cookie is 1234, the variable would be 56789_1234.
I then use ${__V(${id}_g1)} to pull the cookie, associated with a specific ID for another request.
(Essentially a bunch of variables are created, prefixed with the ID and the last returned cookie value, and each subsequent request can then use the ID for its request to pull out the correct cookie)
And I then create the cookie like so:
import org.apache.jmeter.protocol.http.control.CookieManager;
import org.apache.jmeter.protocol.http.control.Cookie;

// Cookie Manager attached to the current sampler
CookieManager manager = sampler.getCookieManager();
// Constructor arguments: name, value, domain, path, secure, expires
Cookie AWSALB = new Cookie("AWSALB", "${cookieVal}", "domain", "path", false, Long.MAX_VALUE);
manager.add(AWSALB);
(I assign ${__V(${id}_g1)} to 'cookieVal' using jp#gc - Set Variables Action)
However, I still can't share the range of variables that are being created amongst all threads.
I've tried properties, but I believe it only works between Thread Groups, and if the groups run consecutively.
I'd like all threads within the group to be able to read all the variables extracted by other threads.
You can "stick" each thread (virtual user) to its own ID (or set of IDs from the CSV), i.e. use separate files for each user and __CSVRead() function or single file and __groovy() function to read the values
If you still want to continue with your approach take a look at Inter-Thread Communication Plugin which provides a FIFO queue
Another way is using JMeter Properties if form of name-value pairs of ID=cookie_value, if there is no cookie value for the ID - write the value into the property, if there is - read the value from the property instead of requesting a new cookie
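A minimal sketch of that properties approach, assuming a JSR223 PostProcessor on the request that returns the cookie and a JSR223 PreProcessor on the requests that reuse it (props is JMeter's standard binding for properties and is shared across the whole JVM; the variable names mirror the question):
// JSR223 PostProcessor (Groovy, Java-compatible syntax): store the freshly extracted cookie
String id = vars.get("id");            // current ID from the CSV
String cookie = vars.get(id + "_g1");  // value created by the Regular Expression Extractor
if (cookie != null) {
    props.put(id + "_cookie", cookie); // properties are global to all threads
}

// JSR223 PreProcessor: fetch the last cookie any thread saw for this ID
String saved = props.getProperty(vars.get("id") + "_cookie");
if (saved != null) {
    vars.put("cookieVal", saved);
}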
I tried properties, but I didn't seem to get them to work. Even though the properties were created (as seen in the Debug PostProcessor), I still couldn't access them between threads for some reason.
However, I solved this by using Beanshell samplers before and after the request, writing the single cookie value out to an individual .txt file per ID, with the file's name being the ID. Each time any thread makes a request with this ID it updates the value in the corresponding .txt file, and before each request it reads the .txt file for that ID to retrieve the last returned cookie.
UPDATE
As the number of threads increased, so did the possibility of threads becoming blocked when trying to access/write to the same .txt file at the same time.
I switched to using __CSVRead(), which so far has worked well.
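For reference, the per-thread file variant from the first suggestion looks roughly like this (hypothetical file naming; __threadNum resolves to the current thread's number, so each thread only ever reads its own file):
${__CSVRead(ids_${__threadNum}.csv,0)}
${__CSVRead(ids_${__threadNum}.csv,next)}
The first call reads column 0 of the current row; the second advances that thread's file to the next row.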

GitHub Checks API vs check runs vs check suites

I want to understand the GitHub Checks API so that I can use it to retrieve data. From the GitHub documentation https://docs.github.com/en/rest/guides/getting-started-with-the-checks-api I can derive that check runs are associated with the SHA of the change, and that a check suite is created at each commit on a branch. The Checks API helps in getting all this information. But I want more clarity on the three of them in terms of their differences. Can anyone please explain these three terms using a simple example and simple terms?
So, first of all, the GitHub Commit Statuses API is separate from the GitHub Checks API (which includes suites and runs), so let's look at them individually first, then I'll explain the differences.
Before we get into it, I want to differentiate a PR Check from a Check Run, to avoid confusion. A PR Check describes the current state (trying not to say status here 😉) of a given job or task running in CI or elsewhere on a specific PR commit. These can be created via either a Commit Status or a Check Run. All the items in the pink box below are PR Checks; notice the Hide all checks button.
GitHub Statuses API
I see this as the simple, all-purpose API for reporting PR Checks for a given commit. It's easy and doesn't require jumping through hoops just to display a simple result of a PR Check. This API also came before the Checks API, so it's a little less powerful.
Pros
Simple and easy API
Simple relation with context as the identifier.
Can have a fully customizable text description on the PR Checks UI.
You can create statuses as a user with a Personal Access Token (aka PAT) without needing to create a GitHub App, though it does work with GitHub Apps too!
Cons
Limited state options: only error, failure, pending, and success are allowed; there is no conclusion subset to define completed jobs the way Check Runs have.
No concept of job timing or duration.
No grouping of statuses the way Check Runs are grouped into Check Suites by their GitHub App.
No annotations
No detailed output logs. This is not too important as you could just link to the URL where the actual PR Check was run such as in CircleCI, Jenkins, etc. But the user is not always authorized to view these runs so the output could be helpful for open source repos that have non-public CI.
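To make the Statuses side concrete, here's a minimal sketch using the official Octokit client (owner, repo, sha, and the URL are placeholders; a plain user PAT is enough here):
import { Octokit } from "@octokit/rest";

const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN }); // user PAT works for statuses

await octokit.rest.repos.createCommitStatus({
  owner: "my-org",              // placeholder
  repo: "my-repo",              // placeholder
  sha: "abc123",                // the commit being reported on
  state: "success",             // error | failure | pending | success
  context: "ci/build",          // identifier; reusing it updates the same PR Check
  description: "Build passed",  // free-form text shown in the PR Checks UI
  target_url: "https://ci.example.com/build/1",
});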
GitHub Checks API
The Checks API is the latest and greatest tool for displaying task results on commits; it can essentially do everything the Commit Statuses API can do and more. Each Check Run belongs to one Check Suite, and one Check Suite can have many Check Runs. You can only have one Check Suite per commit (i.e. head_sha) per GitHub App; attempting to create another for a given App will just return the previously created Check Suite. Thus, a new Check Run is automatically assigned to the Check Suite based on the authenticated GitHub App; you cannot manually assign Runs to Suites.
Contrary to statuses, Check Runs are identified by an auto-generated check_run_id and not a context string.
I haven't touched too much on the Check Suites API because Check Suites are really just a grouping of Check Runs, which is pretty self-explanatory, and they don't affect any of the PR Checks UI, only the grouping of Check Runs in the Checks tab. One thing to note: by default you can create a Check Run without having to first create a Check Suite, and GitHub will just create a new Check Suite for you.
Pros
Greater granularity of status/conclusion for a run.
A lot of power to display the result of a PR Check.
Can provide run context via output summary in Markdown.
Can create annotations for specific lines of code to add information about the analysis performed, like a linting error for a given line of code. These will show up in the PR Files tab UI, similar to PR code comments, as well as in the PR Checks tab with any other Check Run output.
Has time awareness to report on durations automatically with little effort.
Cons
A little more complicated API and relationships to manage.
Part of the description in the PR UI is auto-generated based on the status conclusion and duration of the check run task.
Cannot be created via a user PAT, must be created from an authenticated GitHub App. Read access to this API does not require authenticating as a GitHub App.
One edge case you likely won't come across but that is good to know: if you create a new Check Run with the exact same name as an existing Check Run under the same authenticated app, the resulting behavior is a little strange. Doing this will create the new Check Run under the same name, but will not delete the existing Check Run. However, the Check Suite will not see or link to the existing Check Run, even in the PR UI. But if you change the name of either one so that the runs have unique names, it will be linked up again. It seems GitHub just sorts by date and then filters by unique names when looking up Check Runs in a Check Suite. This does not apply to identical names from different authenticated apps.
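For comparison, a minimal sketch of creating a Check Run with Octokit (app credentials and names are placeholders; remember this requires authenticating as a GitHub App):
import { Octokit } from "@octokit/rest";
import { createAppAuth } from "@octokit/auth-app";

const octokit = new Octokit({
  authStrategy: createAppAuth,
  auth: { appId: 12345, privateKey: "...", installationId: 67890 }, // placeholders
});

await octokit.rest.checks.create({
  owner: "my-org",                      // placeholder
  repo: "my-repo",                      // placeholder
  name: "unit-tests",                   // plays the role of context in the PR UI
  head_sha: "abc123",                   // a Check Suite for this sha is created automatically
  status: "in_progress",                // queued | in_progress | completed
  started_at: new Date().toISOString(), // see the duration note in the updates below
});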
Comparisons
Below is a mapping of sorts to compare similar options between the Commit Statuses API and the Check Runs API. They are not exactly 1:1 but similar.
| Commit Status | Check Run | Option Description |
| --- | --- | --- |
| sha | head_sha | These are equivalent, with the minor exception that the Commit Status is linked to the sha directly, whereas the head_sha is linked to the Check Suite that the Check Run belongs to. |
| context | name | The context is used as an identifier, but both define the title of the PR Check in the PR UI. Because Check Runs are tracked by the check_run_id, the name option may change without creating a new Check Run. This is not the case for context: changing the context will always create a new Commit Status. You may not have duplicate names for Check Runs created with the same GitHub App; see the note above. |
| context* | external_id | The external_id is meant for keeping track of Check Runs without having to store ids or always keep the name constant. This is only somewhat similar to the context option, and only for the purpose of identifying the Check Run. |
| description | output.title | The main difference here is that description gives you the full space to work with, whereas output.title is displayed after the auto-generated status string. |
| target_url | details_url | These are somewhat equivalent; the first difference is that target_url will not show unless defined, whereas details_url defaults to the check run URL in the PR UI. The other difference is that the Check Run's Details button on the PR Checks UI always links to the Checks page, which presents a link to the details_url if defined. |
| state | status | These are very similar; they have slightly different allowed values but effectively appear the same in the PR UI. |
| N/A | conclusion | This just shows the increased power of the Checks API, where you have more granular control over the PR Check status, though it makes no big difference in the UI; see the example variations below. |
| N/A | started_at | No comparable option for Commit Statuses. |
| N/A | completed_at | No comparable option for Commit Statuses. |
| N/A | actions | No comparable option for Commit Statuses. |
| N/A | output | No comparable option for Commit Statuses. |
| N/A | output.annotations | No comparable option for Commit Statuses. |
* Only somewhat similar, not a direct equivalent.
PR Checks UI Component mappings
I've taken a simple PR Check and highlighted the differences between the UI elements produced by the Commit Status API and the Check Runs API. Very similar, but with slight differences.
Below are example variations to relate the options to their impact on the PR Checks UI.
Commit status variations
Check Run variations
If a check run is in an incomplete state for more than 14 days, then the Check Run's conclusion becomes stale. Only GitHub can mark Check Runs as stale.
Checks Tab in PR UI
The Checks Tab in the PR UI will display all created Check Suites and their child Check Runs and allows you to set the output option to control the content of this page. This page would also show images and annotations if you defined those too.
One final note: you cannot DELETE any Commit Status, Check Run, or Check Suite once created; only updating is allowed. With these statuses being so ephemeral, this is not that big of a deal, but in case you were wondering.
Update (6/21/2022)
I found two more quirks to explain. First, the details_url for Check Runs does not set the value of the Details button; instead, the Details button always redirects to the Checks page, which has a link to the details_url if defined.
Second, once a Check Run status is set to 'completed', there is no way to un-complete the Check Run. But you can get around this by creating a new Check Run with the same name and setting that status to something other than 'completed'. See the edge case with duplicate-named Check Runs explained above in detail.
Update (6/29/2022)
As mentioned above, the Check Runs API keeps track of the duration of the run. It is important to note that the started_at time is set only once, whenever a check run is created, regardless of the defined status. For example, if I trigger a CI build and set all job statuses to queued, the jobs' run durations will be inaccurate. A good way to fix this is to always set the started_at time whenever you set the status to in_progress. Something as simple as const started_at = new Date().toISOString() (JavaScript) will do the trick.

Clearing the "Group by" drop down on the Application Insights portal

I have a new Windows application that I am adding Application Insights to. Adding a new chart gives the ability to group on specific custom properties using a drop down. This drop down has 65 properties that AI must have added at some point; they were not explicitly added.
We have a main AppInsights resource that takes all events. We've also created an AppInsights resource for development. The list of custom properties in the drop down is different between these two, even though the source code is the same.
It makes me suspect that there is some process that creates the drop down contents based on the incoming data.
The problem here is that the code has changed, and some properties are no longer available. We want to eliminate these values from the drop down, and add the new ones.
I am perfectly happy just deleting the entire list. Is there a way to do this?
The items that are available in the group by are properties that have ever been received by the back end in the data you've sent, and they aren't editable.
For custom properties/metrics, there's a limit on how many properties the back end will allow before it stops collecting new named custom properties. Conceptually, think of it as the back end storing an array of 200 elements for each telemetry item you send, mapping each custom property name to an index, and that mapping lasts forever. (I believe at the current time that limit is 200 each, but we're working on expanding it.)
So if developers did things in your dev portal, even sent one item with custom property "foo", then that property will be there forever and takes up one of those 200 slots. They can't be deleted or cleared at the moment.
Also, the contents of the group by box are limited to properties that have sent fewer than some threshold of distinct values. (I'm not sure of the exact value, but I believe it's fewer than 100 distinct values.) So fields like ID fields, GUIDs, etc. will eventually stop showing up as group by options, because the group by would create N distinct buckets of 1 item.
It seems like this would be something already mentioned on the App Insights UserVoice site, or documented in the Azure documentation for group by, but I'm not seeing it.
The only real workaround at this time is to create a new Application Insights resource in Azure and start submitting data to that new resource instead of your old one. And then you have to be proactive about never submitting custom properties that you're never going to use, or mixing case, as "Property1" and "property1" will be distinct properties...
If this is a big issue for you, I'd suggest submitting it to Microsoft Connect as a bug, or entering a UserVoice suggestion as mentioned above. I'll pass this on as something that really needs to be documented for group by in the Azure docs, too.

GitHub Search API only returns 30 results

https://api.github.com/search/issues?q=stress+test+label:bug+language:python+state:closed
The above query is supposed to return 76 results, but when I run it, it only returns 30. I guess GitHub returns results in portions when there are more than 30. Any idea how I can get the rest of the results?
You need to use the page parameter, e.g. for the next 30 results, page=2:
https://api.github.com/search/issues?q=stress+test+label:bug+language:python+state:closed&page=2
You can also use the per_page parameter to change the default page size of 30. It supports a maximum size of 100. Like this:
https://api.github.com/search/issues?q=stress+test+label:bug+language:python+state:closed&per_page=100
More detail can be found in GitHub's pagination documentation.
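If you want all results in one go, here is a small sketch that walks the page parameter until everything is collected (Node 18+ built-in fetch; the query string is the one from the question):
const q = "stress+test+label:bug+language:python+state:closed";
let page = 1;
let items: unknown[] = [];
while (true) {
  const res = await fetch(
    `https://api.github.com/search/issues?q=${q}&per_page=100&page=${page}`
  );
  const data = await res.json();
  items = items.concat(data.items);
  // stop once we've seen everything the search reports, or a page comes back empty
  if (items.length >= data.total_count || data.items.length === 0) break;
  page += 1;
}
console.log(`fetched ${items.length} issues`);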
The problem: the GitHub API response doesn't contain all the relevant data.
Solution: the server's API limits the number of items the user gets, splitting the result into pages (pagination). You should explicitly specify in your request how many items you'd like to receive from the server's pagination engine, using the formula for the GitHub pagination API:
?page=1&per_page=<numberOfItemsYouSpecify>
For example, I'd like to get all collaborator info for my private repo. I perform a curl request to GitHub containing my username, authentication token, organization and repository name, and the API call with pagination parameters:
curl -u johnDoe:abc123$%^ "https://api.github.com/repos/MyOrganizationName/MyAwesomeRepo/collaborators?page=1&per_page=1000"
Explanation:
What is pagination: pagination is the process of splitting the contents of a website into discrete pages. Users tend to get lost when there's a bunch of data, and with pagination they can concentrate on a particular amount of content. A hierarchical, paginated structure improves the readability of the content. Pages also load faster due to less content per page, and each page has a separate URL which is easy to refer to.
In this use case the GitHub API splits the result into 30 items per response, depending on the request.
GitHub reference:
Different API calls respond with different defaults. For example, a call to List public repositories provides paginated items in sets of 30, whereas a call to the GitHub Search API provides items in sets of 100.

ExpressionEngine: Maximum number of entries allowed per member group

Is it possible to limit the number of channel entries a member can create?
I would like to set a max number per member group.
Thanks
Yes, but it would require writing an extension. The logic would be something like this (assuming you're talking about limiting on the back-end ... from the front-end, if you're using a Safecracker entry form for example, you'd need to take a different approach):
use the sessions_end hook
check to make sure you're in the control panel ($this->EE->input->get('D') == 'cp')
check to make sure you're on the publish screen ($this->EE->input->get('C') == 'content_publish')
query the database to see how many entries in exp_channel_titles with the channel_id of $this->EE->input->get('channel_id') belong to $this->EE->session->userdata('member_id')
if the result is greater than your allowed maximum, show them an error
That should get you started.
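A rough sketch of that logic inside an extension's sessions_end method (hypothetical, not a drop-in extension; the limit of 10 is arbitrary, and table/column names are per stock EE2):
// In your extension class; EE2-era CodeIgniter-style query builder
function sessions_end($session)
{
    // only act on the control panel publish screen
    if ($this->EE->input->get('D') != 'cp'
        || $this->EE->input->get('C') != 'content_publish') {
        return;
    }

    // count this member's entries in the current channel
    $count = $this->EE->db
        ->where('channel_id', $this->EE->input->get('channel_id'))
        ->where('author_id', $this->EE->session->userdata('member_id'))
        ->count_all_results('channel_titles');

    if ($count >= 10) { // your allowed maximum
        show_error('You have reached the maximum number of entries for your member group.');
    }
}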
