When using Gauge for UI Automation, is there a way to get all screenshots in the same folder as the report?

I am using the Gauge framework for UI Automation.
My hope is that I am able somehow to write screenshots to the folder being made to contain the html report for an execution. The trick is that the folder and its contents for a report are not made until after execution completes so I can't write screenshots inside the reports folder as I am taking them during my tests.
Currently the screenshots that I take are written to a folder in the reports folder (root level) of the project. When trying to copy the entire html report to another location I have to also move the screenshots and then have to manually manage the screenshots in that extra screenshots folder as I delete old reports. There are a lot of after the fact steps that I could do but was hoping for a simpler solution.
I am hoping that I was missing something and there was a way to write the screenshots into each reports folder (when reports are not being overwritten) so that I don't have to mangage (move/delete) the screenshots separately and so that the links in the report to the screenshots stay consistent.

Screenshots in Gauge's html-report are embedded inside the html as base64 encoded strings, so they currently don't exist as separate files that you can manipulate.
Some options for your use case:
1) Implement a custom screenshot grabber to intercept each screenshot and save it to any location you like (see the sketch after this list).
2) Build a custom reporting plugin (e.g., building on this seed example) and collect the screenshots independently of the html report.
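For option 1, here is a minimal sketch using the getgauge Python runner and Selenium. It assumes a getgauge version that exposes the custom_screenshot_writer hook and a get_driver() helper of your own that returns the active WebDriver; hook names and return conventions have changed between runner versions, so check the docs for the one you use.

import os
import time

from getgauge.python import custom_screenshot_writer

from driver_factory import get_driver  # hypothetical helper returning the active Selenium WebDriver


@custom_screenshot_writer
def take_screenshot():
    # Save the screenshot to a directory you control; archiving that directory
    # together with the html report keeps the report's screenshot links consistent.
    screenshots_dir = os.getenv("gauge_screenshots_dir", "screenshots")
    os.makedirs(screenshots_dir, exist_ok=True)
    file_name = os.path.join(
        screenshots_dir, "screenshot-{0}.png".format(int(time.time() * 1000)))
    get_driver().save_screenshot(file_name)
    # Gauge resolves the returned name against gauge_screenshots_dir.
    return os.path.basename(file_name)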
If you do not want the reports to be overwritten, you can set overwrite_reports = false in your environment's properties file (e.g. env/default/default.properties); see the Gauge configuration reference.
Note that you can also change the html-report's theme if you want a different report structure.

Related

Saving mail attachments with Power Automate to SharePoint corrupts the file

I am trying to build a flow based on the Power Automate template "Create Planner task and add attachments to SharePoint on new email arrival".
This template works fine in that it saves all the mail attachments to my SharePoint, but it only shows the link to the last attachment in the task.
I have worked around it by adding a string variable and appending all the SharePoint paths to this variable.
With my flow, everything runs smoothly. But the stored files are about 10-20% bigger than the originals, and they turn out to be corrupted.
The only difference I can spot in the saving of the file is as follows:
the template section has "Get attachment" and the corresponding "body('Get attachment')", while in my version I can only select "Get attachment (V2)" and the corresponding "body('Get attachment (V2)')".
There is an option on V2 that allows or disallows chunking, but it has no effect on my file size.
The other difference is that I have my flow create a different folder based on the task ID, since there were errors when an attachment with the same name arrived a second time. But I have tried my flow without the added folders and there is no difference in file size.
The original files: [screenshot]
And the corrupted files: [screenshot]
It makes no difference whether I use the SharePoint link provided through the flow to my new Planner task or open the files directly within SharePoint; the result is an error.
Can anyone guess why my flow seems to store something extra within the file, thus corrupting it? I can provide the other parts of the flow in more detail too. Here is the overview of my custom flow: [screenshot]
I actually found the answer after rewriting it from scratch:
Using the old template had me looking for the wrong information when adding the attachment content to SharePoint. I had always searched for "body", which was used in the template and gave me this: [screenshot]
But searching for "attachment", the dynamic content actually showed me the right pieces. I am not sure if I missed it before, or if recoding a template hid them somehow. With the rewrite from scratch I found this: [screenshot]
So, to make a long story short: use the "Content Bytes" output of the "Get_Attachment_(V2)" action and everything works fine.
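For reference, if the right dynamic content ever refuses to show up, an expression along these lines in the SharePoint "Create file" content field should yield the raw bytes. This is a hedged sketch: base64ToBinary is a standard Power Automate expression function, but the contentBytes property name is my assumption about the Outlook connector's attachment schema.

base64ToBinary(body('Get_Attachment_(V2)')?['contentBytes'])

The symptoms fit this picture: writing the file in its Base64-encoded transport form instead of as decoded binary produces a larger, unreadable file, which is exactly what picking "Content Bytes" avoids.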

Incorrect code coverage percentage show up on the summary tab of the azure yaml build pipeline

On the Summary tab, code coverage is shown as 57%. However, when I open the code coverage results file on the Code Coverage tab in Visual Studio locally, it shows 84% of the code covered. What might be the reason behind it?
Please let me know if any more information is needed.
What might be the reason behind it?
You can open the code coverage results file and compare it to the file in VS to see how they differ.
One possible reason is that the code coverage in Azure DevOps includes additional .dlls.
You can open the code coverage results file to see whether it includes .dlls that you don't want.
If so, you can try the following solutions:
1) Use a run settings file to specify which .dlls to include. Note: do not use exclude filters; use include filters to cover what you want (see the sketch after this list). You can click the document Customize code coverage analysis for detailed information and steps.
2) Use /ALLOBIND (C++) or the ExcludeFromCodeCoverageAttribute class (C#).
3) Delete the .pdb files of the assemblies you don't want analyzed and adjust your build process accordingly; code coverage can only analyze binaries whose symbol (.pdb) files are available.
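For the first option, a minimal .runsettings sketch following the structure in the Customize code coverage analysis document; the module name is a placeholder for your own assemblies, and the file is passed to the test task via its run settings input:

<RunSettings>
  <DataCollectionRunSettings>
    <DataCollectors>
      <DataCollector friendlyName="Code Coverage">
        <Configuration>
          <CodeCoverage>
            <ModulePaths>
              <!-- Include only your own assemblies; modules matching no include filter are ignored. -->
              <Include>
                <ModulePath>.*MyCompany\.MyProduct\.dll$</ModulePath>
              </Include>
            </ModulePaths>
          </CodeCoverage>
        </Configuration>
      </DataCollector>
    </DataCollectors>
  </DataCollectionRunSettings>
</RunSettings>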

Extract multiple Cognos report definitions

In Cognos, is there a way to get the definitions (filters, selected fields) from a number of reports in a folder?
I've inherited around 500 reports defined in a folder and they all need to be checked and fixed as they have business errors (not technical errors). If it was possible to get all their definitions in a single extract that would save an enormous amount of time having to click multiple times to get that information from each report one by one.
In Access this can be done with VBA (for query definitions), but I'm not sure if there is a scripting language that can be used with Cognos to achieve a similar result.
It sounds like you may want to "validate" each of these 500 reports (effectively equivalent to pressing the "validate" button on each individual report if it was open in the authoring studio).
Validation will ensure that a report specification XML is still syntactically correct, references a package which is still present in the content store, references only query items from that package which still exist, generates valid SQL against the underlying data source, etc.
If that's what you're looking for, an easy way to do batch validation of all 500 reports would be to use MotioPI (it's a free admin tool for Cognos). Here's a short article which walks you through the process:
http://info.motio.com/Blog/bid/70357/Batch-Validation-of-Cognos-Reports
If you're wanting to retrieve the actual report specification (XML) for each of these 500 objects, then you'd need to write a program which utilizes the Cognos SDK to retrieve the specification XML from each of the 500 report objects. After that, you'd need to add logic which examines each of these 500 XML documents, looking for whatever it is you're looking for.
We solved this by exporting the XML of the reports using a SQL query on the content store.
The output is processed with a Python script that converts the XML to a table layout in CSV format.
This CSV file can easily be imported into Excel.
You might want to process the report XML directly in a SQL query with the xmltable function. In our situation this turned out to be a heavy process we didn't want to burden the content store database with, but for a small set of reports it works fine.
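As a rough illustration of that post-processing step, here is a minimal Python sketch. It assumes the report specification XML files have already been exported to disk (the content store SQL differs per Cognos version, so it is omitted) and that the dataItem and filterExpression elements of the report specification schema are what you want to extract:

import csv
import glob
import xml.etree.ElementTree as ET

def local_name(tag):
    # Report specifications are namespaced; match on the local element name only.
    return tag.rsplit('}', 1)[-1]

with open("report_definitions.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["report_file", "kind", "name", "expression"])
    for path in glob.glob("specs/*.xml"):
        root = ET.parse(path).getroot()
        for elem in root.iter():
            if not isinstance(elem.tag, str):
                continue  # skip comments and processing instructions
            if local_name(elem.tag) == "dataItem":
                # Selected fields: name attribute plus the expression text.
                writer.writerow([path, "field", elem.get("name", ""),
                                 "".join(elem.itertext()).strip()])
            elif local_name(elem.tag) == "filterExpression":
                writer.writerow([path, "filter", "", (elem.text or "").strip()])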

CruiseControl.Net - inclusion of html report? (I get 'Unable to find file')

I have a build that produces a (NCover 3.4 Summary) html report.
I'd like to configure the Dashboard to show the html report.
The report is produced perfectly in the working folder during the build; my problem is referencing the report from the Dashboard. Should I do something to store it from the working folder into the 'cc.net build records'? I don't really understand the inner workings there...
My use of the plugin in the dashboard.config is shown below. I don't know what I should use actionName for and have left it with the value from the documentation.
The link in CC.Net resolves as: http://DummyServerName/ccnet/server/local/project/DummyProject/build/log20101221100723Lbuild.2.0.0.176.xml/viewReport.aspx
Thanks for any comments,
Anders, Denmark
<htmlReportPlugin description="NCover Summary" actionName="viewReport" htmlFileName="coverage_summary.html" />
From the CCNET documentation [1]:
"This plug-in can display any file that is in the build folder under the artefacts folder for the project. It cannot display files from any other location (for security reasons). Files can be published to a build folder using the File Merge Task. This will automatically generate the correct folder structure for the HTML reports."
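A publisher section along these lines copies the report into the build folder so the htmlReportPlugin can pick it up. This is a sketch: the path is a placeholder, and as far as I know the action="Copy" attribute on the file merge task is available from CCNet 1.5 on.

<publishers>
  <merge>
    <files>
      <!-- Copy (rather than merge) publishes the file into the build folder under artifacts. -->
      <file action="Copy">C:\Builds\MyProject\WorkingDir\coverage_summary.html</file>
    </files>
  </merge>
  <xmllogger />
</publishers>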
Either way, you can generate an XML report, merge it into the ccnet log, and use an XSL to display it in the dashboard/emails. [2] (As for actionName: it simply names the dashboard action that appears in the URL, which is why your link ends in viewReport.aspx; I believe it needs to be unique across report plugins.)
[1] http://build.nauck-it.de/doc/CCNET/HTML%20Report%20Plugin.html
[2] http://docs.ncover.com/how-to/continuous-integration/cruisecontrol-net/

Append a dynamically changing watermark to a PDF in SharePoint

This is primarily a question of possibilities more than instructions. I'm a programming consultant working on a WSS project site system for my client. We have a document library in which files are uploaded to go through a complex approval process. With multiple stages in this process, we have an extra field which dictates what the current status of the document is.
Now, my client has become enamored with the idea of PDF watermarking. He wants the document (which is already a PDF) to be affixed with a watermark corresponding to the current status, such that with each stage of the approval process the watermark will change.
One method, the traditional one for PDF watermarking, is to keep one "clean" copy of the document hidden somewhere on the site and, at each stage of the approval process, create from it a new PDF that carries the watermark. Since the filename never changes, this new PDF can be uploaded continually to a public library, always overwriting the old version and simulating a "dynamically changing watermark". However, at the various stages there will also be people uploading clean copies with corrections and suggestions, never mind the complexity of juggling two libraries and the fact that we double the number of files stored. My client and I agree that this is not a practical path to choose.
What we would like to do is be able to "modify" the watermark in a PDF, so that we only have to keep one copy of the file. Unfortunately, from what I've seen, in most cases when you make something like a watermark, which in its nature is supposed to be "unmodifyable", you won't be able to edit it later. So, is it possible to have a part of a PDF which cannot be changed by anyone who downloads the file, but can be changed as part of a workflow or other object model process?
PDF Watermarking in SharePoint is a common request. I have written extensively on this topic. See:
Adding a dynamic watermark to a PDF file from a SharePoint Workflow
Adding a (static) watermark to a PDF file from a SharePoint Workflow
Use SharePoint Workflows to inject JavaScript into PDFs and print the ‘open date’
You could use event handlers so that code runs every time a document is checked in. In that code you could perform the fixup/check that makes the watermark what you want it to be. This assumes you can write code that manipulates a PDF's internal structure so that it carries the watermark you desire.
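SharePoint plumbing aside, the stamping half of that fixup could look like this sketch using the pypdf Python library (file names are placeholders). Note the asymmetry: overlaying a stamp is easy, but cleanly swapping out a previously applied stamp means digging into the PDF's internal structure, which is exactly the hard part of this thread.

from pypdf import PdfReader, PdfWriter

def stamp_status(source_pdf, stamp_pdf, out_pdf):
    # The first page of stamp_pdf carries the watermark artwork for the current status.
    stamp_page = PdfReader(stamp_pdf).pages[0]
    writer = PdfWriter()
    for page in PdfReader(source_pdf).pages:
        page.merge_page(stamp_page)  # overlay the stamp onto the page content
        writer.add_page(page)
    with open(out_pdf, "wb") as f:
        writer.write(f)

stamp_status("contract.pdf", "stamp_approved.pdf", "contract_stamped.pdf")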
It sounds to me like you want to allow people to modify the PDF they download, but not modify its watermark. This is probably going to be nigh on impossible if the watermark is embedded in the PDF (afaict) but what if the watermark image is external to the PDF; is it possible to embed a watermark in a PDF that is sourced via HTTP? Then you could embed:
<watermark image="http://sharepoint/site/_vti_bin/docstatus.asmx?id=5">
Of course, I have no idea about PDFs, so this might not be possible but you get the concept.
-Oisin
It is possible to do so if you use a third-party tool. Then you can insert dynamically bound values from your SharePoint metadata, conditions, rules, etc.: http://www.pdfsharepoint.com
