How to attach a markdown page to GitHub Actions workflow run summary? - github-api

The GitHub Action "dotnet-tests-report" attaches a markdown page with test results to the GitHub Actions workflow run summary. This is really nice: once the workflow has finished, the results are immediately visible at a glance.
The action is open source, but the code is complicated enough that I still have not figured out how it does this.
What I want is this:
1. Run some command-line statement that generates a markdown file.
2. Run some code that "publishes" this file.
3. Have it attached to the summary in GitHub Actions.
4. Be happy that everyone in my company can see the results attached to the workflow run.

The action uses the GitHub API to create a check run:
POST https://api.github.com/repos/{owner}/{repo}/check-runs
When creating or updating a check run, you can specify the output parameter in the request body. The action dotnet-tests-report uses the text property of the output parameter for the report:
| Properties of the output object | Description |
| --- | --- |
| title (string) | Required. The title of the check run. |
| summary (string) | Required. The summary of the check run. This parameter supports Markdown. |
| text (string) | The details of the check run. This parameter supports Markdown. |
| annotations (array of objects) | Adds information from your analysis to specific lines of code. Annotations are visible on GitHub in the Checks and Files changed tab of the pull request. The Checks API limits the number of annotations to a maximum of 50 per API request. To create more than 50 annotations, you have to make multiple requests to the Update a check run endpoint. Each time you update the check run, annotations are appended to the list of annotations that already exist for the check run. For details about how you can view annotations on GitHub, see "About status checks". See the annotations object description for details about how to use this parameter. |
| images (array of objects) | Adds images to the output displayed in the GitHub pull request UI. See the images object description for details. |
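To make the request shape concrete, here is a minimal curl sketch of creating a check run whose output carries Markdown. The owner, repo, SHA and report values are placeholders; the token needs write access to checks (inside a workflow, the built-in GITHUB_TOKEN can be used):

# Hypothetical values: replace OWNER, REPO and the SHA with your own; $GITHUB_TOKEN must be able to write checks.
curl -s -X POST "https://api.github.com/repos/OWNER/REPO/check-runs" \
  -H "Accept: application/vnd.github+json" \
  -H "Authorization: Bearer $GITHUB_TOKEN" \
  -d '{
        "name": "test-report",
        "head_sha": "0123456789abcdef0123456789abcdef01234567",
        "status": "completed",
        "conclusion": "success",
        "output": {
          "title": "Test results",
          "summary": "All tests passed.",
          "text": "## Details\n\nAny **Markdown** you like, for example a table of test results."
        }
      }'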
See the code of the action: https://github.com/zyborg/dotnet-tests-report/blob/237826dc017f02ebf61377af95d1a12f8409a527/action.ps1#L133-L149
$url = "https://api.github.com/repos/$repoFullName/check-runs"
$hdr = @{
    Accept        = 'application/vnd.github.antiope-preview+json'
    Authorization = "token $ghToken"
}
$bdy = @{
    name       = $report_name
    head_sha   = $ref
    status     = 'completed'
    conclusion = $conclusion
    output     = @{
        title   = $report_title
        summary = "This run completed at ``$([datetime]::Now)``"
        text    = $reportData
    }
}
Invoke-WebRequest -Headers $hdr $url -Method Post -Body ($bdy | ConvertTo-Json)

The May 2022 post "Supercharging GitHub Actions with Job Summaries" from Konrad Pabjan does mention:
Actions users have been asking for this type of functionality for a long time.
User-generated content from Actions has previously been limited to logs and annotations. It can be difficult to aggregate and group lots of information.
Annotations are important when it comes to highlighting things, like errors and warnings, and are not suited for rich output, like test summaries or build reports.
To get around these issues, we’ve even seen users manually create check runs using our API using the GITHUB_TOKEN that is provided as part of a run, which leads to decreased productivity.
(That is what riQQ's answer describes, which is not optimal)
It’s clear that there’s been a feature gap, forcing users to improvise with less than ideal solutions, which is why we’ve developed Job Summaries!
So:
GitHub Actions Job Summaries allow for custom Markdown content on the run summary generated by each job.
Custom Markdown content can be used for a variety of creative purposes, such as:
Aggregating and displaying test results
Generating reports
Custom output independent of logs
Create summaries
Simply output Markdown content to a new environment variable we’ve introduced called $GITHUB_STEP_SUMMARY.
Any Markdown content added to this file will then be displayed on the Actions run summary page.
That’s it!
steps:
  - name: Adding markdown
    run: echo '### Hello world! :rocket:' >> $GITHUB_STEP_SUMMARY
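Applied to the original question, a minimal sketch: have one step generate the Markdown report, then append the file to the summary from a bash run step (report.md is a placeholder for whatever file your tool writes):

# Inside a run: step of the job (bash); report.md was produced by an earlier step.
cat report.md >> "$GITHUB_STEP_SUMMARY"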

While looking for a similar solution, I found there is an action available now:
https://github.com/LouisBrunner/checks-action
It saves you from crafting the request yourself, and even allows you to supply a markdown file, like the following:
- name: Create CheckRun for code Coverage
  uses: LouisBrunner/checks-action@v1.2.0
  with:
    token: ${{ secrets.GITHUB_TOKEN }}
    name: Code Coverage
    conclusion: ${{ job.status }}
    output: "{\"summary\":\"Code Coverage\"}"
    output_text_description_file: coveragereport/Summary.md

Related

Github checks API vs check runs vs check suites

I want to understand the GitHub Checks API so that I can use it to retrieve data. From the GitHub documentation https://docs.github.com/en/rest/guides/getting-started-with-the-checks-api I can derive that check runs are associated with the SHA of a change and that a check suite is created for each commit on a branch. The Checks API helps in getting all this information. But I want more clarity on the three of them in terms of differences. Can anyone please explain these three terms using a simple example?
So, first of all, the GitHub Commit Statuses API is separate from the GitHub Checks API (which includes suites and runs), so let's look at them individually first, then I'll explain the differences.
Before we get into it, I want to differentiate a PR Check from a Check Run, to avoid confusion. A PR Check describes the current state (trying not to say status here 😉) of a given job or task running in CI or elsewhere on a specific PR commit. These can be created via either a Commit Status or a Check Run. All the items in the pink box below are PR Checks, notice the Hide all checks button.
GitHub statuses - API
I see this as the simple, all-purpose API for reporting PR Checks for a given commit. It's easy and doesn't require jumping through hoops just to display a simple result of a PR Check for a given commit. This API also came before the Checks API, so it's a little less powerful.
Pros
Simple and easy API
Simple relation with context as the identifier.
Can have a fully customizable text description on the PR Checks UI.
You can create statuses as a user with a Personal Access Token (aka PAT) without needing to create a GitHub App, though does work with GitHub Apps too!
Cons
Limited state options: only error, failure, pending and success are allowed; there is no separate conclusion field for describing completed jobs the way Check Runs have.
No concept of job timing or duration.
No grouping of statuses, like Check Runs are grouped into Check Suites by their GitHub App
No annotations
No detailed output logs. This is not too important as you could just link to the URL where the actual PR Check was run such as in CircleCI, Jenkins, etc. But the user is not always authorized to view these runs so the output could be helpful for open source repos that have non-public CI.
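As an illustration, a minimal curl sketch of creating a Commit Status; the owner, repo, SHA, context and target URL are placeholders:

# Hypothetical values: replace OWNER, REPO, the SHA, the context and the target_url.
curl -s -X POST "https://api.github.com/repos/OWNER/REPO/statuses/0123456789abcdef0123456789abcdef01234567" \
  -H "Accept: application/vnd.github+json" \
  -H "Authorization: Bearer $GITHUB_TOKEN" \
  -d '{"state": "success", "context": "ci/build", "description": "Build passed", "target_url": "https://example.com/build/1"}'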
GitHub Checks - API
The Checks API is the latest and greatest tool for displaying task results on commits; it can essentially do everything the Commit Statuses API can do and more. Each Check Run belongs to one Check Suite, and one Check Suite can have many Check Runs. You can only have one Check Suite per commit (i.e. head_sha) per GitHub App; attempting to create another for a given App will just return the previously created Check Suite. Thus, a new Check Run is automatically assigned to the Check Suite based on the authenticated GitHub App; you cannot manually assign Runs to Suites.
Contrary to statuses, Check runs are identified by an auto-generated check_run_id and not a context string.
I haven't touched too much on the Check Suites API because they are really just a grouping of Check Runs which is pretty self-explanatory and they don't affect any of the PR Checks UI, only the grouping of Check Runs in the checks tab. One thing to note is that by default you can create a Check Run without having to first create a Check suite and GitHub will just create a new Check Suite for you.
Pros
Greater granularity of status/conclusion for a run.
A lot of power to display the result of a PR Check.
Can provide run context via output summary in Markdown.
Can create annotations for specific lines of code to add information about the analysis performed, like linting error for a given line of code. These will show up in the PR files tab UI similar to PR code comments as well as in the PR Checks tab with any other Check Run output.
Has time awareness to report on durations automatically with little effort.
Cons
A little more complicated API and relationships to manage.
Part of the description in the PR UI is auto-generated based on the status conclusion and duration of the check run task.
Cannot be created via a user PAT, must be created from an authenticated GitHub App. Read access to this API does not require authenticating as a GitHub App.
One edge case you likely won't come across but that is good to know: if you create a new Check Run with exactly the same name as an existing Check Run under the same authenticated app, the resulting behavior is a little strange. Doing this will create the new Check Run under the same name, but will not delete the existing Check Run. However, the Check Suite will not see or link to the existing Check Run, even in the PR UI. But, if you change the name of either so that the runs have unique names again, the old run will be linked up again. It seems GitHub just sorts by date and then filters by unique names when looking up Check Runs in a Check Suite. This does not apply to identical names from different authenticated apps.
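Since the original question was about retrieving data, here is a minimal curl sketch of listing the Check Runs for a given commit; owner, repo and ref are placeholders, and read access only needs a token that can see the repository, not a GitHub App:

# Hypothetical values: replace OWNER, REPO and REF (a branch name, tag or SHA).
curl -s "https://api.github.com/repos/OWNER/REPO/commits/REF/check-runs" \
  -H "Accept: application/vnd.github+json" \
  -H "Authorization: Bearer $GITHUB_TOKEN"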
Comparisons
Below is a mapping of sorts to compare similar options between the Commit Statuses API and the Check Runs API. They are not exactly 1:1 but similar.
| Commit Status | Check Run | Option Description |
| --- | --- | --- |
| sha | head_sha | These are equivalent, with the minor exception that the Commit Status is linked to the sha directly, whereas the head_sha is linked to the Check Suite that the Check Run belongs to. |
| context | name | The context is used as an identifier, but both define the title of the PR Check in the PR UI. Because Check Runs are tracked by the check_run_id, the name option may change without creating a new Check Run. This is not the case for context: changing the context will always create a new Commit Status. You may not have duplicate names for Check Runs created with the same GitHub App, see the note above. |
| context* | external_id | The external_id is meant for keeping track of Check Runs without having to store ids or always keep the name constant. This is only somewhat similar to the context option, and only for the purpose of identifying the Check Run. |
| description | output.title | The main difference here is that description gives you the full space to work with, whereas output.title is displayed after the auto-generated status string. |
| target_url | details_url | These are somewhat equivalent. The first difference is that target_url will not show unless defined, whereas details_url defaults to the check run URL in the PR UI. The other difference is that the Check Run's Details button on the PR Checks UI always links to the Checks page, which presents a link to the details_url if defined. |
| state | status | These are very similar but have slightly different allowed values; they effectively appear the same in the PR UI. |
| N/A | conclusion | This just shows the increased power of the Checks API, where you have more granular control over the PR Check status, though it is not a big difference in the UI; see the example variations below. |
| N/A | started_at | No comparable option for Commit Statuses. |
| N/A | completed_at | No comparable option for Commit Statuses. |
| N/A | actions | No comparable option for Commit Statuses. |
| N/A | output | No comparable option for Commit Statuses. |
| N/A | output.annotations | No comparable option for Commit Statuses. |

* Only somewhat similar, not a direct equivalent.
PR Checks UI Component mappings
I've taken a simple PR Check and highlighted how the UI elements differ between the Commit Status API and the Check Runs API. They are very similar, with slight differences.
Below are example variations to relate the options to their impact on the
PR Checks UI.
Commit status variations
Check Run variations
If a check run is in an incomplete state for more than 14 days, then the Check Run's conclusion becomes stale. Only GitHub can mark Check Runs as stale.
Checks Tab in PR UI
The Checks Tab in the PR UI will display all created Check Suites and their child Check Runs and allows you to set the output option to control the content of this page. This page would also show images and annotations if you defined those too.
One final note: You cannot DELETE any Commit Status, Check Run or Check Suite once created, only updating is allowed. With these statuses being so ephemeral this is not that big of a deal but in case you were wondering.
Update (6/21/2022)
I found two more quirks to explain. First, the details_url for Check Runs does not set the target of the Details button; instead, the Details button always redirects to the Checks page, which has a link to the details_url if defined.
Second, once a Check Run's status is set to 'completed', there is no way to un-complete the Check Run. But you can get around this by creating a new Check Run with the same name and setting its status to something other than 'completed'. See the edge case with duplicate-named Check Runs explained above in detail.
Update (6/29/2022)
As mentioned above, the Check Runs API keeps track of the duration of the run. It is important to note that the started_at time is set only once, when the check run is created, regardless of the defined status. For example, if I trigger a CI build and set all job statuses to queued, the jobs' run durations will be inaccurate. A good way to fix this is to always set the started_at time whenever you set the status to in_progress. Something as simple as const started_at = new Date().toISOString() (JavaScript) will do the trick.
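As a hedged sketch of that advice, updating the status and started_at together when the run actually starts could look like this with curl; the owner, repo and check run id are placeholders:

# Hypothetical values: replace OWNER, REPO and CHECK_RUN_ID with the id returned when the run was created.
curl -s -X PATCH "https://api.github.com/repos/OWNER/REPO/check-runs/CHECK_RUN_ID" \
  -H "Accept: application/vnd.github+json" \
  -H "Authorization: Bearer $GITHUB_TOKEN" \
  -d "{\"status\": \"in_progress\", \"started_at\": \"$(date -u +%Y-%m-%dT%H:%M:%SZ)\"}"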

Grouping steps or concatenating scenarios in Gherkin

I'm defining a feature to be used in a BDD workflow with tools such as Behat or Cucumber, using the Gherkin language. This is the feature definition so far:
Feature: Save Resource
In order to keep interesting resources that I want to view later
As a user
I need to be able to save new resources
Scenario: Saving a new resource
Given "http://google.com" is a valid link
When I insert "http://google.com" as the new resource link
And I insert "Google Search Engine" as the new resource title
Then I should see a confirmation message with the inserted link and title
When I accept
Then I should have 1 additional resource with the inserted link and title
Scenario: Aborting saving a new resource
Given "http://google.com" is a valid link
When I insert "http://google.com" as the new resource link
And I insert anything as the new resource title
Then I should see a confirmation message with the inserted link and title
When I abort
Then I should have the same number of resources as before
Scenario: Saving a resource with invalid link
Given "http://invalid.link" is an invalid link
When I insert "http://invalid.link" as the new resource link
And I insert anything as the new resource title
Then I should see a confirmation message with the inserted link and title
When I accept
Then I should see an error message telling me that the inserted link is invalid
Scenario: Saving a resource with already saved link
Given "http://google.com" is an already saved link
When I insert "http://google.com" as the new resource link
And I insert anything as the new resource title
Then I should see a confirmation message with the inserted link and title
When I accept
Then I should see an error message telling me that the inserted link already exists
As you can see, there's a lot of boilerplate repeated in several scenarios. I know I could define a Background with a list of steps to be executed before all scenarios, where I could put all the steps up to the confirmation message, but if I did that, I wouldn't be able to distinguish among different links and titles that the user might have inserted.
Is it possible to define a background that is used only for certain, and not all, scenarios? Or maybe to concatenate two scenarios, for example requiring that a certain scenario (that I can reuse) is run before another? Or should I simply keep repeating the boilerplate?
Following on from Thomas' answer
Your scenarios are complicated and repetitive because each time they describe 'how' the user interacts with the application. 'How' has no place in scenarios because:
The details of how you do things are likely to change as you learn more about what you are doing. You don't want to have to change your scenarios every time you change a detail of how you do something.
Putting how in your scenarios makes them boring, repetitive, difficult to read and expensive to implement.
Putting how in your scenarios is generally a way of avoiding the real work that scenarios are there for, which is to find out 'why' you are doing something and to define elegantly and succinctly 'what' you are trying to achieve.
n.b. there are lots of other reasons
Let's look at how simple your scenarios become when you do this extra bit of work.
## Old
Scenario: Saving a new resource
Given "http://google.com" is a valid link
When I insert "http://google.com" as the new resource link
And I insert "Google Search Engine" as the new resource title
Then I should see a confirmation message with the inserted link and title
When I accept
Then I should have 1 additional resource with the inserted link and title
## New
Scenario: Bookmarking
When I save a bookmark
Then my bookmark should be saved
## Old
Scenario: Saving a resource with invalid link
Given "http://invalid.link" is an invalid link
When I insert "http://invalid.link" as the new resource link
And I insert anything as the new resource title
Then I should see a confirmation message with the inserted link and title
When I accept
Then I should see an error message telling me that the inserted link is invalid
## New
Scenario: Bookmark with invalid link
When I bookmark with an invalid link
Then I should see an invalid link error
etc.
Note how the new scenarios illustrate so clearly just how little you are doing in terms of adding business value!!
As for your question about backgrounds, if you need two different backgrounds you need two different .feature files. This is a good thing!!
I would consider thinking about what your scenarios are supposed to do. Currently I see scripts that talk about how stuff is done, and almost nothing about what should be done.
Navigation details aren't really useful for understanding what the system is supposed to do. This is a well-known beginner's mistake, as described in [1] and transcribed in [2].
What you want to do is to push the UI details down the stack. Navigation details live better in helper methods/classes used by the steps you implement. A page object is one way of hiding navigation from the scenarios.
Some of your duplication will disappear when you get rid of the navigation details. The remaining duplication might be acceptable for readability reasons.
Remember that understanding the scenarios is much more important than avoiding a bit of duplication.
[1] https://cucumber.io/blog/2016/05/09/cucumber-antipatterns
[2] http://www.thinkcode.se/blog/2016/06/22/cucumber-antipatterns

What is the parameterization concept in HP ALM?

I need help with the HP ALM parameterization concept. What is the use of the parameterization concept in HP ALM, and how can we implement it? Please give me some examples.
In terms of manual test cases, parameterization comes in very handy when you have scenarios that have pretty much the same steps, or at least the same structure.
When you create a new test case, there is a Parameters button in the header menu where you can list all the parameters you think you will need. For a simple login test on multiple pages (home page, account page) with a similar layout, you would have username, password and url as parameters. It also allows you to give each parameter a default value. You can add more parameters as you write steps in the step description window.
Once you have the complete test case created, go to the Test Configurations tab of the test case and add a test configuration by clicking the plus button; there will be one by default. You can create as many test configurations as you want. For the above example you could add three: one for the home page, one for the account page and one for the search page. Test configurations help you have the test instances ready while pulling the test cases into the Test Lab, without having to drag the same test case multiple times.
A test configuration allows you to define the actual/default value for each parameter of that configuration, reducing the time it takes to fill in this info during execution.
Test execution: after you pull the test cases into the Test Lab, a pop-up appears during a manual run showing the value of each parameter used by that test case. The values can be added or updated there.
Here is a video I found on YouTube:
https://youtu.be/vCrJcHrosio
I don't have ALM available to show this with diagrams. Hope this helps.
Create a test case yourself to try this feature out.
Thanks
Happy exploring !!
Parameters help the user to assign a value to a variable so as to execute the same test with different sets of data.
For example, for a login-related test, the user name and password can be two parameters, each of which is assigned a value.
How to use parameters:
Create a test case with steps in the Test Plan module, under the selected cycle and the proper folder.
Select the test step against which you would like to add the parameter; the 'Parameter' icon will be enabled. Click it, and the 'Parameter' dialog box will open. Now click the 'New Parameter' button, add the parameter (here: username or password), add the default and/or actual values and a description, and save.
You can also add different test configurations to use these parameters for different use-case scenarios.
More details can be found here: http://alm-help.saas.hpe.com/en/12.53/online_help/Content/UG/ui_menu_test_parameters.htm

Edit Dynamic Values in Workflows

I am trying to retrieve and extract specific data from incoming emails in Microsoft Dynamics CRM to use it in workflows (for updating records).
The only option I can find so far while working with workflows is to get the full subject or the full body of the email.
Is there a way to extract a specific part of these two fields?
For example, how can I extract the first 10 characters from the Subject, or search the Subject or Body for specific characters, or filter with a regex?
I don't want to create a custom plugin, but using JavaScript would be great if it could be triggered automatically without any user action.
Unfortunately, OOB workflow functionality does not allow you to manipulate the data within these fields. JavaScript (in the context of CRM) is a client-side scripting tool, so it cannot run without user interaction.
I would suggest creating a Custom Workflow Activity that takes the subject and body values as input parameters (from your original workflow). Within the custom workflow activity you can perform string manipulation using common C# methods, and then return these values to the original workflow or update/create records within your custom workflow.
The following URL gives a good example of creating a Custom Workflow Activity.
https://msdn.microsoft.com/en-gb/library/gg334455.aspx

How to write declarative Cucumber features for describing CRUD operations?

I understand the difference between imperative and declarative cucumber steps, but I have not seen any real world examples of this. I always feel like my feature files are becoming too verbose.
It seems like there would need to be a cucumber feature for each step in the life cycle:
foobars/list_foobars.feature
foobars/create_foobar.feature
foobars/view_foobar.feature
foobars/edit_foobar.feature
foobars/delete_foobar.feature
In the create feature alone, it seems like you would want to list out the fields that can be entered, which ones are required, what happens when you enter invalid data, etc. I don't know of a declarative way to do this. Of course in subsequent features, you would just say Given a foobar exists rather than going through all the steps to create one.
How detailed do you go when describing your application's behavior? Can you provide some examples of feature files that you feel are acceptably complete?
I like to keep cucumber tests human readable, so assuming we have a story for editing a foobar with invalid data, I'd want a scenario like:
# foobars/edit_foobar.feature
Feature: As a user, I want to edit a Foobar, so I can Baz
Scenario: Validation Errors
Given I am logged in as a user
And a foobar exists
And I edit the foobar with invalid data
Then I should see validation errors
I think that captures what we want out of the story, without having to deal with all the details of which fields to edit, what buttons to submit, etc. It doesn't test all the possible cases, but those should really be tested via unit tests (model tests that the validations are set, and controller tests that the flash messages are set or request tests that the errors are being served).
The other scenarios are similar:
Scenario: Successful Edit
Given I am logged in as a user
And a foobar exists
And I edit the foobar with valid data
Then I should see the valid data
Some people will want to specify the valid data as part of the test itself, but I personally prefer to delegate these to the step definitions in order to keep the scenarios clean. You just need one example to make sure the golden case works, because again this isn't the appropriate place to test that all the form fields work (and it will become a maintenance headache if you do specify every single field).
I am thinking of maybe not testing this at all using Cucumber, and instead just making a comment in the Feature section.
Alternatively, maybe one can do something like this:
# categories.feature
Scenario Outline: Manage categories
Given I want to manage categories
When I <crud_type> a category
Then I should be taken to the listing page
Examples:
| crud_type |
| create |
| edit |
| delete |
Scenario: View category
Given I want to view a particular category
When I click on a category in the category list
Then I should see that category's view page
