Coded UI data-driven tests - coded-ui-tests

I am automating a test scenario which runs on different test inputs. The inputs are passed from a CSV file or MTM. During the test run, the first iteration goes through successfully, but the second iteration fails on the same flow that the first iteration completed successfully.
Could anyone explain the cause of this problem and why it is happening? I thought it might be due to objects that are set to some value during the first run and not reset to null for the second run, so when the next run happens it fails on some controls with "Unable to find control", even though the tool recognized them successfully in the first run. If this is the problem, kindly help us with a solution. Thanks in advance!
Regards,
Amsaveni

You will get this error if your test is running but the controls or app have not yet loaded. Say round 1 has finished and round 2 has begun: if you are not starting the app from the same starting point, or not waiting for the control, then you will get this exception.
Verify that after each test has completed your app starts in the same state
Verify that you are waiting for your controls
Hand code and debug your test

Related

Cypress can't run multiple tests

I have been seeing some weird behaviour in Cypress lately.
When I debug, the issue is that the test starts with the URL account.domain.com;
as the test goes on, it naturally moves to app.domain.com.
All is good for the first test.
The second one uses the same logic and starts with account.domain.com ...
Since the first test ended on the URL app.domain.com, it seems that Cypress is unable to load account.domain.com in the next test. It doesn't show any error; it just keeps loading.
Do you have any solution for this, please?
I'm using Cucumber, by the way.

Show progress in an azure-pipeline output

I have my computer set up as an agent pool in azure-devops. I'm creating a latency test that the developers can use in their CI. The script runs in Python and tests various points in a system I have set up for the company, which is connected to the cloud; it's mainly for informative purposes. When I run the script I have to wait some time while the connected system goes through its normal network cycle inspecting all the devices on the local network (not very important to the question). However, while I'm waiting I show a message in the terminal with "..." going from "." to ".." to "...", just to show the script didn't crash or anything.
The Python code looks like this and works just fine when I run it locally:
sys.stdout.write("\rprocessing queue, timing varies depending on priority" + ("."*( i % 3 + 1))+ "\r")
sys.stdout.flush()
However, the output shown in the Azure pipeline shows all of the lines without replacing them. Is there a way to do what I want?
I am afraid showing progress this way is not supported in Azure Pipelines. The Azure pipeline log console is not interactive; it just captures the agent machine's terminal output.
You might have to use a simpler way to indicate that the script is still executing and not finished yet, for example (a fuller sketch follows below):
sys.stdout.write("Waiting for processing queue ...")
You can report this problem to the Microsoft development team; hopefully they find a way to improve this in a future sprint.
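As a rough illustration of that suggestion (my own sketch, not from the answer; the loop bounds and message are placeholders), the script could print a plain heartbeat line at intervals instead of rewriting one line with "\r":

import sys
import time

# Placeholder loop: replace the range/sleep with the real wait-and-poll logic.
for _ in range(10):
    time.sleep(30)
    # Each heartbeat is a full line, so the pipeline log can show it as-is
    # instead of relying on "\r" overwrites that the log cannot replay.
    sys.stdout.write("processing queue, timing varies depending on priority...\n")
    sys.stdout.flush()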
I have seen it once but never actually used it myself; this can be done in both Bash and PowerShell. I'm not sure whether it works from inside a Python script, so you might have to call Bash/PowerShell from within your Python script.
It is possible to set a progress value in percent that is visible outside of the log, but as I understand it this value is step-specific, meaning it only applies to the pipeline step you're currently in. You could drag the numeric value (however many percent) along into the next step, but the progress counter would then show up again in the next step. I believe it is not possible to have a pipeline-global display of progress.
If you export a progress value, it will show up beside the step name in the left-hand step list.
Setting a progress value (and also exporting a variable from one step to another, which is typically done the same way) is done by echoing special logging commands. There's a great description here: Logging commands
What you want to do is something just like the example shown on the linked page:
echo "Begin a lengthy process..."
for i in {0..100..10}
do
sleep 1
echo "##vso[task.setprogress value=$i;]Sample Progress Indicator"
done
echo "Lengthy process is complete."
All of these special logging commands start with ##vso[task... The "vso" is a relic from the time when Azure DevOps was called Visual Studio Online.
There are a whole bunch of them, but most of the time what you really need is exporting variables from one build-step context to another, which is done with ##vso[task.setvariable variable=NAME;]value
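Since the script in question is Python, here is a small sketch of my own (not from the linked page) that emits the same logging commands directly from Python; task.setprogress and task.setvariable are the documented commands, while the variable name and messages are placeholders:

import sys
import time

# Report progress in 10% increments; the sleep stands in for real latency-measurement work.
for pct in range(0, 101, 10):
    time.sleep(1)
    # Azure Pipelines parses logging commands from stdout, so a plain print is enough.
    print("##vso[task.setprogress value={};]Latency test progress".format(pct))
    sys.stdout.flush()

# Export a flag so a later pipeline step can read it as the variable latencyTestDone.
print("##vso[task.setvariable variable=latencyTestDone;]true")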

How to stop the whole test execution but with PASS status in Robot Framework?

Is there any way I can stop the whole robot test execution with PASS status?
For some specific reasons, I need to stop the whole test but still get a GREEN report.
Currently I am using Fatal Error, which raises an assertion error and returns FAIL in the report.
I was trying to create a user keyword to do this, but I am not really familiar with the Robot Framework error-handling process; could anyone help?
There's an attribute ROBOT_EXIT_ON_FAILURE in BuiltIn.py, and I am thinking about creating another attribute like ROBOT_EXIT_ON_SUCCESS, but have no idea how.
Environment: robotframework==3.0.2 with Python 3.6.5
There is nothing built-in to support this. By design, a fatal error will cause all remaining tests and suites to have a FAIL status.
Just about your only choice is to write a keyword that sets a global variable, and then have every test include a setup that uses Pass Execution If to skip the test when the flag is set (a sketch of this approach follows below).
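For illustration only, a minimal Python keyword library sketching that approach could look like this; the library and keyword names are made up for the example, while Set Global Variable, Get Variable Value and Pass Execution If are real BuiltIn keywords:

# stop_suite.py - hypothetical keyword library for the "global flag + Pass Execution If" idea.
from robot.libraries.BuiltIn import BuiltIn

class stop_suite:

    def request_early_stop(self):
        # Call this keyword where you want to stop; later tests will see the flag.
        BuiltIn().set_global_variable('${EARLY_STOP}', True)

    def skip_if_early_stop_requested(self):
        # Use this keyword in every test's [Setup]; once the flag is set,
        # each remaining test ends immediately with PASS status.
        stop = BuiltIn().get_variable_value('${EARLY_STOP}', False)
        BuiltIn().pass_execution_if(stop, 'Execution stopped early with PASS status')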
If I understood you correctly, you need to pass the test execution forcefully and return a green status for that test, is that right? There is a built-in keyword, Pass Execution, for that. Did you try using it?

Auto-correlation callback function issue - loadrunner

I'm working on a new application written in Siebel 8.1. The issue appears when I try to replay the script, and I can't handle it.
Replay Output:
Error -27086: Auto-correlation callback function
"flCorrelationCallbackParseWebPage" failed (rc=1) for parameter
"Siebel_Parse_Web_Page40"
web_reg_save_param("Siebel_Parse_Web_Page40",
"LB/IC=",
"RB/IC=",
"Ord=1",
"Search=Body",
"RelFrameId=1",
"AutoCorrelationFunction=flCorrelationCallbackParseWebPage",
"AutoCorrelationDll=LrwiSiebelCorrelationWrapper",
LAST);
I have done all the steps to prepare the recording options from: http://software-qe.blogspot.se/2008/01/siebel-7x-record-and-replay-for.html
I'm using Loadrunner 11.52 (Siebel Web protocol), IE8.
We've been using the autocorrelation library for quite a few years on my team and we see this a lot. Unfortunately, it's not an easy problem to diagnose.
First I would check your test results and your VUser log to see if something happened before the autocorrelation failed. (Make sure your logging is set to parameter substitution in runtime settings).
Check your parameter files for extra spaces, commas, etc. Sometimes I've seen that error right after it rejects something about your parameter file.
Worst case scenario, your script is corrupted and you'll have to start over. We've gotten in the habit of making frequent backups of our scripts just because of this issue. Usually, we'll be able to start from our backup and continue or create a new script and paste the old code in. Autocorrelation error "magically" goes away with the same code in a new script.
If auto(magical)correlation does not work, then use manual correlation.
Record twice with the same data. Compare. You will find session, state and time data.
Change the credentials. Re-record. Compare. You will find credential-related correlation.
Change the business record but keep the same business process. Re-record. Compare. You will find the business-related correlation.
Do not expect autocorrelation to provide a magically working script. You have about a 0.0001% chance of that happening without LoadRunner script development intervention.

How to fix ME51N user exit EXIT_SAPLMEREQ_010?

I have a problem with ME51N. I have an include in EXIT_SAPLMEREQ_010 that has a bunch of code which we use to raise errors. My problem is:
When I run ME51N with the required data (material number, quantity, etc.), I get some errors, which also include the error that I'm expecting, on the first run. However, when I terminate ME51N and run it again with the exact same data, I don't get my error. I have debugged it and put a breakpoint in my include in EXIT_SAPLMEREQ_010, and it never reaches my breakpoint on the second run. (It reaches the breakpoint on the first run but not the second one.)
I don't know how, but later, with the same material, it works fine again on the first run, yet on the second run I still can't get the error.
Can anyone please help me with this?
Basically, the problem was the development system from the beginning (sigh -_-). When other errors occur besides my errors, I assume my error sometimes gets stuck and never pops up because of the standard SAP procedure. So in the test system (QA), ME51N works perfectly when I'm only trying to trigger my error.
Thanks to anyone who actually tried to help, and I hope this might be useful to someone in the future.
Talha
