Request: Need a query that will return the Run Steps and Step Status of the last executed Test Lab run

Hello helpful HP ALM gurus,
I currently use the following query:
SELECT
    CF_ITEM_NAME AS "Test Set Folder Name",
    CY_CYCLE AS "Test Set Name",
    TS_NAME AS "Test Case Name",
    RN_STATUS AS "Test Case Status",
    ST_STEP_NAME AS "Test Step Name",
    ST_STATUS AS "Test Step Status",
    ST_DESCRIPTION AS "Test Step Description",
    ST_EXPECTED AS "Test Step Expected Result",
    ST_ACTUAL AS "Test Step Actual Result",
    RN_HOST AS "Test Host Name",
    RN_TESTER_NAME AS "Tester Name",
    ST_EXECUTION_DATE AS "Test Step Execution Date",
    ST_EXECUTION_TIME AS "Test Step Execution Time"
FROM STEP a, TEST b, CYCLE c, RUN d, CYCL_FOLD e
WHERE
    a.ST_TEST_ID = b.TS_TEST_ID AND
    c.CY_CYCLE_ID = d.RN_CYCLE_ID AND
    d.RN_TEST_ID = b.TS_TEST_ID AND
    e.CF_ITEM_ID = c.CY_FOLDER_ID AND
    RN_HOST IS NOT NULL AND
    RN_TESTER_NAME IS NOT NULL AND
    CF_ITEM_PATH LIKE 'AAAAAG%'
    -- CF_ITEM_ID LIKE '267%'  -- comment out this line or the CF_ITEM_PATH line and use the other
ORDER BY TS_NAME, RN_RUN_ID, ST_RUN_ID, ST_EXECUTION_DATE, ST_EXECUTION_TIME ASC
Unfortunately, the problem with this query is that it requires me to first run a separate query that captures all the CF_ITEM_PATH values in my multi-item ALM project. I then have to plug each string into the "CF_ITEM_PATH LIKE" condition to get a list.
Is it possible to create and run a query that returns at least the following data: TS_NAME, ST_STEP_NAME, ST_STATUS, ST_DESCRIPTION, ST_EXPECTED, ST_ACTUAL, ST_EXECUTION_DATE?
I would like the query to pull this data for the most recent / last executed Test Set in the Test Lab. Is this possible, and how can it be done? If it cannot be done, can I change my query above to use the Run ID value of a Test Set instead of CF_ITEM_PATH to obtain the information I desire? How?
Please note that I am not experienced with SQL and would require detailed instructions.
Thank you for your help!

SELECT C1.CY_CYCLE_ID, C1.CY_CYCLE AS TEST_SET_NAME, TC1.TC_TEST_ID, T.TS_NAME,
       R1.RN_STATUS, R1.RN_RUN_ID, R1.RN_RUN_NAME, R1.RN_TESTER_NAME,
       S1.ST_STEP_NAME, S1.ST_DESCRIPTION, S1.ST_EXPECTED, S1.ST_ACTUAL
FROM CYCL_FOLD CF1, CYCL_FOLD CF2, CYCLE C1, TESTCYCL TC1, TEST T, RUN R1, STEP S1
WHERE
    CF1.CF_ITEM_NAME = 'Cycle One' AND
    CF2.CF_ITEM_PATH LIKE CONCAT(CF1.CF_ITEM_PATH, '%') AND
    C1.CY_FOLDER_ID = CF2.CF_ITEM_ID AND
    TC1.TC_CYCLE_ID = C1.CY_CYCLE_ID AND
    T.TS_TEST_ID = TC1.TC_TEST_ID AND
    R1.RN_TESTCYCL_ID = TC1.TC_TESTCYCL_ID AND
    R1.RN_TEST_ID = TC1.TC_TEST_ID AND
    S1.ST_RUN_ID = R1.RN_RUN_ID
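To restrict this to the most recent execution, which is what the question ultimately asks for, one option (an untested sketch, assuming RN_RUN_ID increases with execution order in the standard ALM schema) is to append a correlated subquery to the WHERE clause so that only each test instance's latest run survives:

```sql
AND R1.RN_RUN_ID = (
    SELECT MAX(R2.RN_RUN_ID)
    FROM RUN R2
    WHERE R2.RN_TESTCYCL_ID = TC1.TC_TESTCYCL_ID
)
```

Each test instance then contributes only the steps of its latest run.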

Related

CosmosDB Find average time for message to complete

I need some help with a SQL query in Cosmos. I have to find the average time it takes a message to complete during a load test from the events stored in our DB.
So far I can get the start and end times like this:
SELECT
(SELECT c.TimestampUTC WHERE c.Event = 'message-accepted') as StartTime,
(SELECT c.TimestampUTC WHERE c.Event = 'message-completed') as EndTime
FROM c
WHERE c.TrackingId = 'LoadTest' AND (c.Event = 'message-accepted' OR c.Event = 'message-completed')
But I get an error when I try to get the DateTimeDiff like this:
SELECT
(SELECT c.TimestampUTC WHERE c.Event = 'message-accepted') as StartTime,
(SELECT c.TimestampUTC WHERE c.Event = 'message-completed') as EndTime,
DateTimeDiff("second", StartTime, EndTime) as TotalTime
FROM c
WHERE c.TrackingId = 'LoadTest' AND (c.Event = 'message-accepted' OR c.Event = 'message-completed')
I am stuck here because I need the difference to use the AVG function. Any help would be appreciated.
EDIT
Here is a sample of the data stored in Cosmos
{
"PartitionKey": "LoadTest",
"RowKey": "4ee9709f-c826-4a88-9d6f-240ba439eb1d",
"TrackingId": "LoadTest",
"Event": "message-accepted",
"TimestampUTC": "2022-09-14T19:12:18.8358914Z"
}
And this is the error I am getting when trying DateTimeDiff:
"Failed to query item for container enginelog:
One of the input values is invalid."
It does not give much info, which is why I am looking for help. I am following the format for the function in the documentation here
As per the sample data shared in the question, the assumption is that the "message-accepted" and "message-completed" events are stored in two different documents. As David mentioned, functions can't be applied across properties that live in separate documents unless the properties are brought together first.
In order to achieve what is required, you may need to write client-side code that fetches the values from the separate documents and computes the difference there. Please refer to the link.
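As a sketch of that client-side approach (Python purely for illustration; the field names follow the sample document, but the MessageId correlation field is a hypothetical assumption, since the sample doesn't show how an accepted event is paired with its completed event):

```python
from datetime import datetime
from statistics import mean

def parse_ts(ts):
    # Cosmos timestamps like "2022-09-14T19:12:18.8358914Z" carry 7 fractional
    # digits; trim to 6 so strptime's %f directive can parse them
    return datetime.strptime(ts[:26], "%Y-%m-%dT%H:%M:%S.%f")

def average_completion_seconds(docs):
    """Pair accepted/completed events per message and average the durations."""
    starts, ends = {}, {}
    for d in docs:
        key = d["MessageId"]  # hypothetical correlation field, not in the sample
        if d["Event"] == "message-accepted":
            starts[key] = parse_ts(d["TimestampUTC"])
        elif d["Event"] == "message-completed":
            ends[key] = parse_ts(d["TimestampUTC"])
    durations = [(ends[k] - starts[k]).total_seconds()
                 for k in starts if k in ends]
    return mean(durations) if durations else None

# Example with one message whose events are 2 seconds apart
docs = [
    {"MessageId": "m1", "Event": "message-accepted",
     "TimestampUTC": "2022-09-14T19:12:18.8358914Z"},
    {"MessageId": "m1", "Event": "message-completed",
     "TimestampUTC": "2022-09-14T19:12:20.8358914Z"},
]
print(average_completion_seconds(docs))  # 2.0
```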

Getting "Please rebuild this data combination" on a computer but not on another one

This is my first try at using Power Query... I've built a "dynamic" query in which I can change the retrieved fields as well as the filtering fields and values used by the query.
It works perfectly on my computer, but as soon as I try to execute it on another computer, I get the "Please rebuild this data combination" error. I saw some posts saying I would have to split my query somehow, but I have not been able to figure it out.
Here is what my 2 tables look like:
Condition and fields selection
and here is my Query with the error:
Query
This might not be very elegant, but it allows me, through a VBA script, to generate the list of fields to be retrieved and to generate the condition to be used by the SQL.
Any idea why it's not working on the other computers, or how to improve the solution I'm using?
Thank you!
Notes:
Hi, all my Privacy Levels are already set to 'None'.
I've tried to parameterize my code but I can't figure out how. The WHERE condition is dynamic: it could be 'Where Number = "1234"', but in another case it might be 'Where Assignee = "xyz"'.
Here is a simplified example of my code:
let
Source = Sql.Database("xxxx", "yyyy", [Query=
"Select network, testid
from CM3T1M1 "
& paramConditions[Conditions]{0} &
" "])
in
Source
The "rebuild this data combination" error comes from the Formula.Firewall, a feature meant to prevent accidentally leaking data. You can change the privacy level to ignore it.
See also: docs.microsoft/dataprivacyfirewall
Is the dynamic query inserting those cells into the SQL query? Report Parameters are nice for letting the user change variables without having to re-edit the query.
Parameterized native SQL queries
from: https://blog.crossjoin.co.uk/2016/12/11/passing-parameters-to-sql-queries-with-value-nativequery-in-power-query-and-power-bi/
let
Source = Sql.Database("localhost", "Adventure Works DW"),
Test = Value.NativeQuery(
Source,
"SELECT * FROM DimDate
WHERE EnglishMonthName=#MonthName AND
EnglishDayNameOfWeek=#DayName",
[
MonthName = "March",
DayName = "Tuesday"
]
)
in
Test
Dynamic Power Query version of SQL Query
To dynamically generate this SQL Query
select NUMBER, REQUESTED_BY from SourceTable
where NUMBER = 404115
Table.SelectRows is your WHERE; Table.SelectColumns is your SELECT.
let
Source = ...,
filterByNum = 404115,
columnNames = {"NUMBER", "REQUESTED_BY"},
removedColumns = Table.SelectColumns(
Source, columnNames, MissingField.Error
),
// I used 'MissingField.Error' so you know right away
// if there's a typo or bug
// assuming you are comparing Source[NUMBER]
filteredTable = Table.SelectRows(
Source, each [NUMBER] = filterByNum
)
in
filteredTable

Finding Vertices that are connected to all current vertices

I am fairly new to graph DBs and Gremlin, and I am having a similar problem to others (see this question), in that I am trying to get the Resource vertices that meet all the Criteria of a selected Item. So for the following graph:
Item 1 should return Resource 1 & Resource 2.
Item 2 should return Resource 2 only.
Here's a script to create the sample data:
g.addV("Resource").property("name", "Resource1")
g.addV("Resource").property("name", "Resource2")
g.addV("Criteria").property("name", "Criteria1")
g.addV("Criteria").property("name", "Criteria2")
g.addV("Item").property("name", "Item1")
g.addV("Item").property("name", "Item2")
g.V().has("Resource", "name", "Resource1").addE("isOf").to(g.V().has("Criteria", "name", "Criteria1"))
g.V().has("Resource", "name", "Resource2").addE("isOf").to(g.V().has("Criteria", "name", "Criteria1"))
g.V().has("Resource", "name", "Resource2").addE("isOf").to(g.V().has("Criteria", "name", "Criteria2"))
g.V().has("Item", "name", "Item1").addE("needs").to(g.V().has("Criteria", "name", "Criteria1"))
g.V().has("Item", "name", "Item2").addE("needs").to(g.V().has("Criteria", "name", "Criteria1"))
g.V().has("Item", "name", "Item2").addE("needs").to(g.V().has("Criteria", "name", "Criteria2"))
When I try the following, I get Resource 1 & 2, as it looks at all Resources related to either Criteria, whereas I only want the Resources that match both Criteria (Resource 2).
g.V()
.hasLabel('Item')
.has('name', 'Item2')
.outE('needs')
.inV()
.aggregate("x")
.inE('isOf')
.outV()
.dedup()
So if I try the following, as the referenced question suggests.
g.V()
.hasLabel('Item')
.has('name', 'Item2')
.outE('needs')
.inV()
.aggregate("x")
.inE('isOf')
.outV()
.dedup()
.filter(
out("isOf")
.where(within("x"))
.count()
.where(eq("x"))
.by()
.by(count(local)))
.valueMap()
I get the following exception, because the answer provided for the other question doesn't work in the Azure Cosmos DB graph database: it doesn't support the Gremlin filter step.
Failure in submitting query: g.V().hasLabel('Item').has('name', 'Item2').outE('needs').inV().aggregate("x").inE('isOf').outV().dedup().filter(out("isOf").where(within("x")).count().where(eq("x")).by().by(count(local))).valueMap(): "Script eval error: \r\n\nActivityId : d2eccb49-9ca5-4ac6-bfd7-b851d63662c9\nExceptionType : GraphCompileException\nExceptionMessage :\r\n\tGremlin Query Compilation Error: Unable to find any method 'filter' # line 1, column 113.\r\n\t1 Error(s)\nSource : Microsoft.Azure.Cosmos.Gremlin.Core\n\tGremlinRequestId : d2eccb49-9ca5-4ac6-bfd7-b851d63662c9\n\tContext : graphcompute\n\tScope : graphparse-translate-csharpexpressionbinding\n\tGraphInterOpStatusCode : QuerySyntaxError\n\tHResult : 0x80131500\r\n"
I'm interested to know if there is a way to solve my problem with the gremlin steps MS provide in Azure CosmosDB (these).
Just replace filter with where; the query will work just the same.
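Applied to the traversal above, that fix looks like this (an untested sketch, identical to the failing query except for the filter step being renamed):

```groovy
g.V()
 .hasLabel('Item')
 .has('name', 'Item2')
 .outE('needs')
 .inV()
 .aggregate("x")
 .inE('isOf')
 .outV()
 .dedup()
 .where(
   out("isOf")
   .where(within("x"))
   .count()
   .where(eq("x"))
   .by()
   .by(count(local)))
 .valueMap()
```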

SugarCRM - very SIMPLE Logic Hook executed with delay

I have a very, very simple logic hook. I am still learning and I am confused right at the start.
I turned on Developer Mode.
I already have the field "FIRST_NAME" in the Contacts module.
I created my field "MY_FIELD", also in the Contacts module.
In the logic_hooks.php file I added:
$hook_array['before_save'] = Array();
$hook_array['before_save'][] = Array(1, 'Value from one field to another', 'custom/modules/Contacts/my.php', 'User_hook','copy');
In the my.php file I added:
class User_hook {
    function copy(&$bean, $event, $arguments)
    {
        $bean->my_field_c = $bean->fetched_row['first_name'] . " - additional text";
    }
}
So when I enter "First" in the First Name field, My Field gets the value " - additional text", but it should get "First - additional text".
If I go to the Edit View and enter "Second" in the First Name field, My Field gets "First - additional text", but it should get "Second - additional text".
If I enter "Third" in the Edit View, My Field gets "Second - additional text", but it should get "Third - additional text".
So obviously my logic hook is executed with a delay of one iteration. Why, and how do I change it? This is my first hook, so I am not very experienced. Thanks for the help!
$bean->fetched_row['first_name'] will return the value of the field BEFORE you change it. You'd use this to see what the value of first_name was before the user changed it on the form.
Try using
class User_hook {
    function copy(&$bean, $event, $arguments)
    {
        $bean->my_field_c = $bean->first_name . " - additional text";
    }
}

Insert randomly generated strings as nested table

These days I am working on a small example/project of my own. I am creating n sets of random strings of variable lengths. Here is what I want to obtain:
Two names of length from 3 to 25 characters.
A message ranging from 40 to 300 characters.
In my C example, I created a struct and kept inserting instances into a list. In my Lua example, I want a nested table like this:
tTableName = {
[1] = {
"To" = "Name 1",
"From" = "Name 2",
"Message" = "The first message generated"
}
[2] = {
"To" = "Name 3",
"From" = "Name 4",
"Message" = "The second message generated"
}
}
So, basically my structure goes like this:
struct PM {
char *sTo, *sFrom, *sMessage;
} PMs;
I want a similar structure/table in Lua so that I can use the table.insert method. I currently am doing it like this:
tTempTable = {
"To" = "Name 1",
"From" = "Name 2",
"Message" = "The first message generated"
}
table.insert( tTableName, tTempTable )
but I think this wastes a lot of processing time. Currently I am only generating a sample of 30 such PMs, but later I shall be generating thousands of them. Please advise.
I think you're falling into the trap of prematurely optimizing your code before you even know where the bottleneck is... but the following document contains a bunch of optimization info about Lua in general, including tables. The guy who wrote it is one of the head architects of Lua.
http://www.lua.org/gems/sample.pdf
First of all, this isn't really a question; I'll guess you're asking if there's a more efficient way to do this. In general you want to write for clarity and not sweat small performance gains unless you run into issues. But here are some notes about your code, including a few about efficiency:
The table constructor posted isn't valid. Either of the following fixes would work:
tTempTable = {
["To"] = "Name 1",
["From"] = "Name 2",
["Message"] = "The first message generated"
}
tTempTable = {
To = "Name 1",
From = "Name 2",
Message = "The first message generated"
}
You don't need to specify numerical indexes when constructing an array. You can replace this:
tTableName = {
[1] = { To = "Name 1", From = "Name 2", Message = "The first message generated" },
[2] = { To = "Name 3", From = "Name 4", Message = "The second message generated" },
}
With this, which means the exact same thing but is more succinct:
tTableName = {
{ To = "Name 1", From = "Name 2", Message = "The first message generated" },
{ To = "Name 3", From = "Name 4", Message = "The second message generated" },
}
This also happens to be more efficient; Lua can preallocate the array size it needs, whereas it's not smart enough to do that with the previous constructor.
As for a better way to write this in general, it's hard to say without knowing more about your application. If you're just trying to test some PM code, why not just generate the strings you need on the fly at the point of use? Why preallocate them into a table at all?
If you must preallocate, you don't have to store them as structured data. You could just have three arrays: ToNames, FromNames, Messages, then select from them at random at the point of use:
local to = ToNames [ math.random(1,#ToNames ) ]
local from = FromNames[ math.random(1,#FromNames) ]
local message = Messages [ math.random(1,#Messages ) ]
TestPM(to, from, message)
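Tying this back to the original requirements (names of 3 to 25 characters, messages of 40 to 300), here is a minimal sketch of such an on-the-fly generator in Lua; the charset and structure are my assumptions, using nothing beyond the standard library:

```lua
local charset = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ "

local function randomString(minLen, maxLen)
    local len = math.random(minLen, maxLen)
    local chars = {}
    for i = 1, len do
        local k = math.random(1, #charset)
        chars[i] = charset:sub(k, k)
    end
    -- table.concat avoids building many intermediate strings
    return table.concat(chars)
end

local function randomPM()
    return {
        To      = randomString(3, 25),
        From    = randomString(3, 25),
        Message = randomString(40, 300),
    }
end

local PMs = {}
for i = 1, 30 do
    PMs[i] = randomPM()  -- direct indexed append; no temporary table needed
end
```

Note the direct indexed assignment PMs[i] = randomPM(): for a loop that appends in order it is equivalent to table.insert(PMs, randomPM()) and sidesteps the temporary-table concern from the question.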
