LoadRunner - Calling One Script from another - performance-testing

I am currently working on load testing of an SAP application where I need to trigger an action in the SAP frontend (the Fiori screen), while the corresponding response time is only available in the SAP backend (the SAP GUI screen).
I need to capture it this way because no request for this particular action is captured in either Fiddler or the browser dev tools.
My question: I currently have two scripts - a. SAP Web, b. SAP GUI.
Can I call the SAP GUI script from the SAP Web script to capture the response time and write it to the log file?
TIA

Can I call the SAP GUI script from the SAP Web script to capture the response time and write it to the log file?
Not that I am aware of. If these two virtual users were the same architectural tier and language type, such as Winsock and HTTP, then you might consider a multi-protocol virtual user. But here you have multiple breaks in architecture making the jump from an HTTP user to SAP GUI. In general, you call down the stack and receive values back up the stack. An HTTP virtual user calling SAP GUI would be calling from the bottom of the Application layer to the top of it. This just doesn't work.
I think you have some tool issues to work out as well. It is odd that you would "write it in a log file" when a full transaction model exists to send a timing record to the testing tool for analysis alongside other timing records/transactions. You're not even taking advantage of in-tool capabilities using variants of lr_message(). This is before considering the lock contention of multiple users all attempting to write to the same file. Look up lr_output_message() and lr_set_transaction() - one of these is likely a far better fit than the path you are on.
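For illustration only, a minimal sketch of what the in-tool route can look like in a C-based Vuser action; the transaction names and the externally measured duration are placeholders, not part of the original scripts:

    Action()
    {
        double elapsed;

        // Option 1: let LoadRunner time the step itself as a transaction.
        lr_start_transaction("sap_backend_action");
        // ... perform the step being measured ...
        lr_end_transaction("sap_backend_action", LR_AUTO);

        // Option 2: if the duration was obtained some other way (placeholder
        // value here), publish it as a transaction instead of writing to a file.
        elapsed = 1.234; /* hypothetical: seconds measured elsewhere */
        lr_set_transaction("sap_backend_action_external", elapsed, LR_PASS);

        // lr_output_message() sends the detail to the Controller/Analysis logs
        // rather than to a private file on the load generator.
        lr_output_message("sap_backend_action took %.3f seconds", elapsed);

        return 0;
    }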

Related

Response to be Captured at Multiple Steps

I am currently testing functionality built on SAP SRM with Fiori as its frontend, for which I need some support.
Testing Tool - MF LoadRunner
Protocol - SAP-Web
Test Case
After the initial login to the system, we need to create an invoice, which will generate an Excel file. We need to download it and alter it with a few pieces of document-specific data.
After the data is modified, the Excel file needs to be uploaded again, which automatically triggers the save action.
At this point, we need to measure the time taken to upload the Excel file and the time taken to save the document.
Following this, I need to trigger the next step, a calculation of a few values, which is an asynchronous process whose completion is only signalled by a notification on the Fiori screen of the application.
After the calculation is done, I need to perform another step; the completion of that step is also signalled only by a notification.
We have tried the Ajax TruClient protocol as well, but it didn't work out.
I am able to download the Excel file and save it locally, but I am not able to edit the data; any alteration of the data via the script is not happening.
Also, can someone suggest any other method to capture the time taken at multiple steps?
Think about this architecturally. You have TruClient, which is a browser. It is not a generic functional automation solution that exercises anything that is not a browser. You need to operate two apps, Excel and a browser, so you need to move up the stack.
Don't time the stuff in Excel. This is 100% client-based.
You have three options if you absolutely, under every circumstance, must run Excel.
GUI Virtual User. This is a full-blown automation client. In the Micro Focus universe, this is QuickTest Pro. It can automate your browser. It can automate your Excel. It can automate your PowerPoint. It can even automate Solitaire. You will need a full OS instance to run this.
Citrix Virtual User. If you head down this route you have options: Micro Focus, Tricentis, Login VSI, and perhaps a handful of smaller players.
Remote Desktop Virtual User. Similar to Citrix but with fewer players. Micro Focus is the dominant solution.
If you leave the absolute, non-negotiable requirement to run Excel behind, then you can have a pre-prepared modified Excel file for upload that represents the "changed" document and run this purely as an HTTP virtual user. TruClient is not going to operate your Windows common dialog box (again, not a browser) to allow you to pick a file.
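As a rough sketch of that last, HTTP-only approach: a multipart upload of the pre-prepared file might look something like the following in a web/HTTP Vuser. The URL, form field name, and local file path are placeholders and would have to be taken from a recording of the real upload request:

    Action()
    {
        // Time the upload of the pre-modified Excel file as its own transaction.
        lr_start_transaction("upload_invoice_excel");

        // Hypothetical endpoint and form field names - correlate from a recording.
        web_submit_data("upload_invoice",
            "Action=https://fiori.example.com/srm/invoice/upload",
            "Method=POST",
            "EncType=multipart/form-data",
            "Mode=HTML",
            ITEMDATA,
            "Name=file", "Value=C:\\perf\\data\\invoice_modified.xlsx", "File=yes", ENDITEM,
            LAST);

        lr_end_transaction("upload_invoice_excel", LR_AUTO);

        return 0;
    }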

Is there a way to automatically test interactive command line (console) Linux app?

I am one of the developers of a console two-pane file manager for Linux (a port of Far Manager, called far2l); the application interface resembles Midnight Commander. I am faced with the need to implement automated testing. Can you please tell me which application or framework can be used for this?
I need the ability to write scripts containing a sequence of keystrokes that will be transmitted to the console application (the ability to specify delays between emulated keystrokes is also needed), as well as the ability to automatically analyze the application interface drawn in the console, for example for the presence of certain strings. And I need some kind of framework to run a number of such tests automatically and generate testing reports.
Most console application testing tools I could find (like "cram", "cli-unit", "aruba", or "exactly") unfortunately don't seem to be designed specifically for testing interactive applications.
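For illustration only (none of this comes from the question itself): the core mechanism such a driver needs - spawn the application on a pseudo-terminal, write keystrokes with delays, and inspect what it draws - can be sketched in plain C. The application path, keystrokes, and expected string below are placeholders, and the raw read makes no attempt to strip terminal escape sequences:

    #include <pty.h>       /* forkpty(); link with -lutil on glibc systems */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void)
    {
        int master;
        pid_t pid = forkpty(&master, NULL, NULL, NULL);

        if (pid == 0) {
            /* child: run the console app under test (path is a placeholder) */
            execlp("./app_under_test", "./app_under_test", (char *)NULL);
            _exit(127);
        }

        /* parent: emulate keystrokes with delays, then inspect the output */
        usleep(500000);                 /* wait for the UI to draw */
        write(master, "\t", 1);         /* e.g. switch panes with Tab */
        usleep(200000);
        write(master, "\r", 1);         /* press Enter */

        char buf[4096];
        ssize_t n = read(master, buf, sizeof(buf) - 1);
        if (n > 0) {
            buf[n] = '\0';
            /* crude check: does the drawn screen contain an expected string? */
            puts(strstr(buf, "expected-string") ? "PASS" : "FAIL");
        }
        return 0;
    }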

How to download files or custom logs written as part of a test in performance center?

I have a script which, on running, creates an ID, and I want the list of IDs from a 30-minute test executed with this script. I want to run the test on Performance Center, which uses load generators I don't have access to. Basically, how do I get the files that are created by the script using functions like fopen/fprintf after every test executed in Performance Center?
lr_output_message() - this will send the data to the Controller for tracking. You could also use Virtual Table Server (VTS) or a queue service running in a cloud provider.
Realistically, you do not want to write data to the local load generator. You will turn the local drive into a bottleneck for the entire test as multiple users compete for access to the write head. This is also why logging set to off, or to log-on-error, is recommended for execution.
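A minimal sketch of that first suggestion, assuming the created ID has already been captured into a parameter named createdId (a hypothetical name) via correlation:

    Action()
    {
        // {createdId} is assumed to have been captured earlier, e.g. with
        // web_reg_save_param_ex(). lr_output_message() writes to the Vuser log,
        // which is collected with the test results, so the IDs can be pulled
        // from Performance Center instead of from the load generator's disk.
        lr_output_message("Created ID: %s", lr_eval_string("{createdId}"));

        return 0;
    }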

Generate background process with CLI to control it

To give context, I'm building an IoT project that requires me to monitor some sensor inputs. These include Temperature, Fluid Flow and Momentary Button switching. The program has to monitor, report and control other output functions based on those inputs but is also managed by a web-based front-end. What I have been trying to do is have a program that runs in the background but can be controlled via shell commands.
My goal is to be able to do the following on a command line (bash):
pi#localhost> monitor start
sensors are now being monitored!
pi#localhost> monitor status
Temp: 43C
Flow: 12L/min
My current solution has been to create two separate programs, one that sits in the background, and the other is just a light-weight CLI. The background process listens to a bi-directional Linux Socket File which the CLI uses to send it commands. It then sends responses back through said socket file for the CLI to then process/display. This has given me many headaches but seemed the better option compared to using network sockets or mapped memory. I just have occasional problems with the socket file access when my program is improperly terminated which then requires me to "clean" the directory by manually deleting the socket file.
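For illustration, a minimal C sketch of the daemon side of the arrangement described above; the socket path, command name, and sensor readings are placeholders, and error handling is omitted:

    /* Background process listening on a Unix domain socket file and answering
     * commands sent by a thin CLI client (sketch only). */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <sys/un.h>

    #define SOCK_PATH "/tmp/monitor.sock"   /* hypothetical socket file location */

    int main(void)
    {
        struct sockaddr_un addr = {0};
        char buf[256];
        int srv, cli;
        ssize_t n;

        srv = socket(AF_UNIX, SOCK_STREAM, 0);
        addr.sun_family = AF_UNIX;
        strncpy(addr.sun_path, SOCK_PATH, sizeof(addr.sun_path) - 1);

        unlink(SOCK_PATH);   /* remove a stale socket left by an unclean exit */
        bind(srv, (struct sockaddr *)&addr, sizeof(addr));
        listen(srv, 5);

        for (;;) {
            cli = accept(srv, NULL, NULL);
            n = read(cli, buf, sizeof(buf) - 1);
            if (n > 0) {
                buf[n] = '\0';
                if (strncmp(buf, "status", 6) == 0)
                    dprintf(cli, "Temp: 43C\nFlow: 12L/min\n");  /* placeholder readings */
                else
                    dprintf(cli, "unknown command\n");
            }
            close(cli);
        }
    }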
I'm also hoping to have the program ensure there is only ever one instance of the monitor program running at any given time. I currently achieve this by capturing my PID and saving it to a file which I can look for when my program is starting. If the file exists, I self-terminate with an error. I really don't like this approach as it just feels too hacky to me.
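A common variation on that PID-file idea, sketched here as an assumption rather than the asker's actual code, holds an advisory lock (flock) on the file; the kernel drops the lock automatically if the process dies, so a stale file left by a crash does not block the next start:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/file.h>
    #include <unistd.h>

    #define PIDFILE "/tmp/monitor.pid"   /* hypothetical location */

    int main(void)
    {
        char buf[32];
        int len;
        int fd = open(PIDFILE, O_RDWR | O_CREAT, 0644);
        if (fd < 0) { perror("open"); return 1; }

        /* Non-blocking exclusive lock, held for the life of the process. */
        if (flock(fd, LOCK_EX | LOCK_NB) != 0) {
            fprintf(stderr, "monitor is already running\n");
            return 1;
        }

        /* Record our PID so the CLI (or a human) can find it. */
        len = snprintf(buf, sizeof(buf), "%d\n", (int)getpid());
        ftruncate(fd, 0);
        write(fd, buf, len);

        /* ... run the monitoring loop, keeping fd open ... */
        pause();
        return 0;
    }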
So my question: Is there a better way to build a background process that can be easily controlled via command line? or is my current solution likely the best available?
Thanks in advance for any suggestions!

Interactive website for Linux server

I want to create a website through which I can access a Linux server and execute some operations, like running scripts and firing off some commands. I need some expert guidance with the concepts; just guide me on how I can achieve this. I have googled a lot but am unable to find the proper concepts or methods. Or is it even possible?
Have you considered using something along the lines of VNC or SSH? As JNevill pointed out, either of these methods would be infinitely more secure. Also, consider using something like cron for scheduled jobs.
However, sometimes a webpage for running a program on a server may be acceptable, e.g. for an IoT project. To do this, you would set up an API page using a back-end language like PHP (recommended if you're using Linux). In the back-end language you would check your user credentials and then run the command.
Some guidelines for doing this:
Never allow commands to be entered on the webpage; only allow tasks to be completed using controlled inputs like buttons, select boxes, and sliders - e.g. a button to get an Arduino to close your garage door, a slider to dim/brighten a light bulb, or a button to start a program to index something.
None of these "command buttons" should ever do anything harmful, e.g. delete a folder or file.