Auto-correlation callback function issue - loadrunner - performance-testing

I'm working on a new application written in Siebel 8.1. The issue appears when I try to replay the script, and I can't resolve it.
Replay Output:
Error -27086: Auto-correlation callback function
"flCorrelationCallbackParseWebPage" failed (rc=1) for parameter
"Siebel_Parse_Web_Page40"
web_reg_save_param("Siebel_Parse_Web_Page40",
"LB/IC=",
"RB/IC=",
"Ord=1",
"Search=Body",
"RelFrameId=1",
"AutoCorrelationFunction=flCorrelationCallbackParseWebPage",
"AutoCorrelationDll=LrwiSiebelCorrelationWrapper",
LAST);
I have followed all the steps for preparing the recording options from: http://software-qe.blogspot.se/2008/01/siebel-7x-record-and-replay-for.html
I'm using LoadRunner 11.52 (Siebel Web protocol) with IE8.

We've been using the autocorrelation library for quite a few years on my team and we see this a lot. Unfortunately, it's not an easy problem to diagnose.
First, I would check your test results and your VUser log to see if something happened before the autocorrelation failed. (Make sure your logging includes parameter substitution in the run-time settings.)
Check your parameter files for extra spaces, commas, etc. Sometimes I've seen that error right after LoadRunner rejects something about a parameter file.
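If it helps, the same logging can also be switched on from inside the script with the standard LoadRunner API; a small sketch:
// enable extended logging with parameter substitution, equivalent to
// Run-Time Settings > Log > Extended log > Parameter substitution
lr_set_debug_message(LR_MSG_CLASS_EXTENDED_LOG | LR_MSG_CLASS_PARAMETERS, LR_SWITCH_ON);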
Worst case scenario, your script is corrupted and you'll have to start over. We've gotten in the habit of making frequent backups of our scripts just because of this issue. Usually, we'll be able to start from our backup and continue or create a new script and paste the old code in. Autocorrelation error "magically" goes away with the same code in a new script.

If auto(magical)correlation does not work, then use manual correlation.
Record twice with the same data. Compare: you will find session, state, and time data.
Change the credentials. Re-record. Compare: you will find the credential-related correlations.
Change the business record but keep the same business process. Re-record. Compare: you will find the business-related correlations.
Do not expect autocorrelation to produce a magically working script. You have about a 0.0001% chance of that happening without LoadRunner script development intervention.
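As a starting point, a manual correlation in the Siebel Web protocol is a plain web_reg_save_param placed before the request that returns the value; a sketch with made-up boundaries and parameter name (take the real left/right boundaries from the recorded server response):
web_reg_save_param("SiebelSessionToken",
    "LB=SWEC=",
    "RB=&",
    "Ord=1",
    "Search=Body",
    LAST);
// then replace the recorded value in later requests with {SiebelSessionToken}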


Declarative Pipeline using env var as choice parameter value

Disclaimer: I can achieve the behavior I’m looking for with Active Choices plugin, BUT I really want this to work in a Jenkinsfile and controlled with scm because it’s tedious to configure the Active Choices on each job we may need them on. And with it being separate from the Jenkinsfile creation, it’s then one job defined in multiple places. :(
I am looking to verify whether this is possible at all, because I can't get the syntax right if it is, and I haven't been able to find any examples online:
pipeline {
    environment {
        ARTIFACTS = lib.myfunc() // this works well
    }
    parameters {
        choice(name: "Artifacts", choices: ARTIFACTS) // I can't get this to work
    }
}
I cannot use the function inline in the declaration of the parameter. The errors were clear about that, but it seems as though I should be able to do what I’ve written out above.
I am not home, so I do not have the exceptions handy, but I will add them soon. They did not seem very helpful while I was working on this yesterday.
What have I tried?
I've tried having the function return a List, because it requires a list according to the docs, and I've also tried (illogically) returning a String in the precise syntax of a list of strings. (It was hacky, like return "['" + artifacts.join("', '") + "']" to look like ['artifact1.zip', 'artifact2.zip'].)
I also tried things like "$ARTIFACTS" and ${ARTIFACTS} in desperation.
The list of choices has to be supplied as a String containing newline characters (\n): choices: 'TESTING\nSTAGING\nPRODUCTION'
I was tipped off by this article:
https://st-g.de/2016/12/parametrized-jenkins-pipelines
Related to a bug:
https://issues.jenkins.io/plugins/servlet/mobile#issue/JENKINS-40358
:shrug:
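Putting that tip into a minimal runnable sketch (the parameter and stage names here are just examples, not from the question):
pipeline {
    agent any
    parameters {
        // older declarative syntax: choices as one newline-separated String
        choice(name: 'Env', choices: 'TESTING\nSTAGING\nPRODUCTION')
    }
    stages {
        stage('Report') {
            steps {
                echo "Selected environment: ${params.Env}"
            }
        }
    }
}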
First, we need to understand that Jenkins starts running your pipeline code by presenting you with the Parameters page. Once you've set the parameters and pressed Build, a node is allocated, variables are set, and your code starts to run.
But in your pipeline, as presented above, you want to run some code to prepare the parameters.
This is not how Jenkins usually works. It's definitely not doing the following: allocating a node, setting the variables, running some of your code until the parameters clause is reached, stopping all that, presenting you with a GUI, and then continuing where it left off. Again, that's not how Jenkins works.
This is why, when writing a new pipeline, your first option to build it is Build and not Build with Parameters: Jenkins hasn't run your code yet, so it has no idea whether there are any parameters. When running for the first time, it will remember the parameters (and any choices, if there were any) as configured for this (first) run, so in the second run you will see the parameters as configured in the first run. (Generally, in run number n you will see the result of the configuration from run number n-1.)
There are a number of ways to overcome this.
If having a "somewhat recent" (and not "current and absolutely up-to-date") situation fits you, your code may need minor changes to work — second time. (I don't know what exactly lib.myfunc() returns but if it's a choice of Development/Staging/Production this might be good enough.)
If having a "somewhat recent" situation is an absolute no-no (e.g. your lib.myfunc() returns the list of git branches, and "list of branches as of yesterday" is unacceptable), then your only solution is ActiveChoice. ActiveChoice allows you to run some code before showing you the Build with Parameters GUI (with script approval etc.).

Show progress in an azure-pipeline output

So I have my computer set up as an agent pool in Azure DevOps. I'm creating a latency test that the developers can use in their CI. The script runs in Python and tests various points in a system I have set up for the company, which is connected to the cloud; it's mainly for informative purposes. When I run the script I have to wait some time while the system goes through its normal network cycle inspecting all the devices on the local network (not very important for the question). While waiting, I show a message in the terminal with "..." going from "." to ".." to "...", just to show the script didn't crash or anything.
The Python code looks like this and works just fine when I run it locally:
sys.stdout.write("\rprocessing queue, timing varies depending on priority" + ("."*( i % 3 + 1))+ "\r")
sys.stdout.flush()
However, the output shown in the Azure pipeline shows all of the lines without replacing them. Is there a way to do what I want?
I am afraid showing progress like this is not supported in Azure Pipelines. The pipeline log console is not interactive; it just captures the agent machine's terminal output.
You might have to use a simpler way to indicate that the script is still executing and not finished yet. For a simple example:
sys.stdout.write("Waiting for processing queue ...")
You can report this problem to the Microsoft development team. Hopefully they will find a way to fix this in a future sprint.
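A minimal sketch of that idea, assuming the wait loop from the question: print a complete line per poll instead of rewriting one line with \r, since the pipeline log keeps every line verbatim (the poll count and interval are made up):
import time

polls = 10                # hypothetical number of network-cycle checks
for i in range(polls):
    time.sleep(30)        # wait for the next inspection cycle
    print(f"processing queue, check {i + 1}/{polls} done", flush=True)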
I have seen this once but never actually used it myself. It can be done in both Bash and PowerShell; I'm not sure whether it works from inside a Python script, so you might have to call Bash/PowerShell from within your Python script.
It is possible to set a progress value in percent that is visible outside of the log, but as I understand it this value is step-specific, meaning it only applies to the pipeline step you're currently in. You could carry the numeric value (however many percent) along into the next step, but the progress counter would then again show up in the next step. I believe it is not possible to have a pipeline-global display of progress.
If you export a progress value, it will show up beside the step name in the left-hand step list.
Setting a progress value (and also exporting a variable from one step to another, which is typically done the same way) works by echoing special logging commands. There's a great description to be found here: Logging commands
What you want is something just like the example shown on the linked page:
echo "Begin a lengthy process..."
for i in {0..100..10}
do
sleep 1
echo "##vso[task.setprogress value=$i;]Sample Progress Indicator"
done
echo "Lengthy process is complete."
All of these special logging commands start with ##vso[task... The VSO is a relic from the time when Azure DevOps was called Visual Studio Online.
There are a whole bunch of them, but most of the time what you really need is exporting variables from one build step context to another, which is done with ##vso[task.setvariable variable=myVar]value
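Since the asker's script is Python, the same logging command can also be emitted without a Bash wrapper; a sketch, assuming the script can estimate how far along it is:
import time

for pct in range(0, 101, 10):
    time.sleep(1)  # stand-in for the real work
    # Azure DevOps parses logging commands from stdout, so a plain print works
    print(f"##vso[task.setprogress value={pct};]processing queue", flush=True)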

How to set current date time in Configuration Block

I followed the instructions (link below) to set the trigger to the current date and time in a Configuration Block,
but trigger={date}{time}; does not work; it returns the error
"the configuration block was not well-formed."
Who knows the right expression for the current date? Thanks a lot.
https://support.tibco.com/s/article/How-to-append-rows-and-update-data-table-on-a-frequent-basis
This looks like either a typo in the article or a bug in the Automation Services Job Builder. You can get around this message by surrounding the value with quotes, so:
trigger="{date}{time}";
While the quotes are not required (according to the Configuration Block documentation), I would argue that using them is a best practice, because you never know if the value you're passing is going to jank up the configuration block parser.
Also a tip: you can, and probably should, test any configuration blocks in the Web Player before deploying a job in Automation Services. When doing this, don't forget to URL-encode, like, everything. Here's an example from the documentation I linked above:
http://spotfire.cloud.tibco.com/spotfire/wp/OpenAnalysis?file=/Gallery/Introduction%20to%20Spotfire&configurationBlock=SetFilter(tableName=%22World%20Bank%20Data%22,columnName=%22Region%22,values=%7B%22North%20America%22,%22Europe%20%26%20Central%20Asia%22%7D);
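Applied to this question, the block trigger="{date}{time}"; would be URL-encoded as follows (my own encoding of the characters =, ", {, }, and ;, worth double-checking with an encoder):
...&configurationBlock=trigger%3D%22%7Bdate%7D%7Btime%7D%22%3B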

How to run one feature file as initialization (i.e. before all other feature files) in cucumber-jvm?

I have a Cucumber feature file 'A' that sets up the environment (data cleanup and initialization). I want it executed before any other feature files run.
It's kind of like a @Before hook as in http://zsoltfabok.com/blog/2012/09/cucumber-jvm-hooks/. However, that does not work because my feature file 'A' contains hundreds of Cucumber steps and it is not as simple as:
@Before
public void beforeScenario() {
    tomcat.start();
    tomcat.deploy("munger");
    browser = new FirefoxDriver();
}
Instead, it would be better to be able to run 'A' as a whole feature file.
I've searched around but did not find an answer. I am surprised that no one has had this type of requirement before.
The closest thing I found is 'background'. But that would mean one huge feature file with the content of 'A' as the 'background' at the top and the rest of my tests in the same file. I really do not want to do that.
Any suggestions?
By default, Cucumber features are run single-threaded, in order:
Alphabetically by feature file directory
Alphabetically by feature file name within directory
Scenario execution is then by order within the feature file.
So have your initialization feature in the first directory (alphabetically) with a file name that sorts first (alphabetically) in that directory.
That being said, it is generally bad practice to require an execution order for your feature files. We run our feature files in parallel, so order is meaningless. For Jenkins or TeamCity you could add a build step that executes the one feature file, followed by a second build step that executes the rest of your feature files (see the sketch below).
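For example, with older Cucumber-JVM versions the two build steps could look like this (the paths and the cucumber.options system property are assumptions; recent Cucumber versions use cucumber.features instead):
mvn test -Dcucumber.options="classpath:features/setup/A.feature"
mvn test -Dcucumber.options="classpath:features/main"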
I also have a project where we have a single feature file that contains a very long scenario called Scenario: Test data, with a lot of very long steps like this:
Given the system knows about the following employees
  | uuid | user-key   | name | nickname |
  | 1    | 0101140000 | Anna | annie    |
  ... hundreds of lines like this follow ...
We see these long "system knows" scenarios as quite valuable, so that our testers, Product Owner, and developers have a baseline of what data is in the system. Our domain is quite complex, and we need this baseline of reference data for everyone to be able to understand the tests.
(These reference data become almost like well-known personas, and are a shared team metaphor.)
In the beginning, we relied on the alphabetical naming convention to have AAA.feature run first.
Later, we discovered that this setup was brittle, and decided to use the following trick, inspired by the PageObject pattern:
Add a Background with the single step Given I set test data for all feature files
In the step definition, have a factory create the test data, and make sure inside the factory method that the data is only created once, like testFactory.createTestData()
In this way, you have both the convenience of expressing reference setup as a scenario, that enhances team communication, but you also have a stable test setup.
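A sketch of that step definition in Java (class, package, and factory names are made up; the static guard ensures the data is created only once per test run):
import cucumber.api.java.en.Given;

public class TestDataSteps {
    private static boolean initialized = false;

    @Given("^I set test data for all feature files$")
    public void iSetTestDataForAllFeatureFiles() {
        if (!initialized) {
            TestDataFactory.createTestData(); // hypothetical factory
            initialized = true;
        }
    }
}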
Hope this is helpful!
Agata

InstallShield: How can single custom actions be tested?

(I'm using InstallShield 2012, V.18.)
In setup.rul I defined a function (with a prototype declaration), included the file with the function definition, and compiled it successfully (InstallShield compile).
Now I'd like to test this function (only).
I don't want to run the whole installation, not even a test run (Ctrl-T), because I want to avoid a complete rebuild, which takes too long to do often.
Is there a way to test only the custom function, either in InstallShield or from the command line?
Not really, although I can give you some tips.
Create a dummy feature with a release flag of DEVONLY.
Create a dummy component for that feature.
Create a Product Configuration that builds a single MSI with no setup.exe and a release flag of DEVONLY.
Building this product configuration will be very fast: a couple of seconds on my laptop with an SSD. You can selectively include other features through the use of release flags if you need certain components in order to set up the test environment for your CA.
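If you build from the command line anyway, building only that configuration should be just as fast; a sketch with placeholder project, configuration, and release names (check the exact flags against your ISCmdBld.exe version):
ISCmdBld.exe -p "C:\work\MyInstaller.ism" -a "DevOnlyConfig" -r "DevRelease"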
Another strategy is to develop your CA in a test harness project and then transplant the code into your real installer when you know it all works.
Christopher, thanks for the fast reply. I have to put my response here as an answer because commenting was restricted (too long).
I also thought about using such a workaround but first wanted to avoid it if possible.
But OK, now I have tried these steps. 1 and 2 were no problem, but 3: InstallShield didn't allow me to configure a Product Configuration without a Setup.exe in my .ism file (although we have IS2012 Pro).
Then I tried it in a Basic MSI project (is that what you meant?), which really does build in a very short time. And now I can see my scripting during the test release, yeah :-)
To "transplant" my script now to the main .ism, I'm missing an export function for .rul files like the one that exists for custom actions; there is only an import. So I will have to copy-paste while switching between .ism files, but never mind.
