Response to be Captured at Multiple Steps - performance-testing

I am currently testing functionality built on SAP SRM with a Fiori frontend, and I need some support.
Testing Tool - MF LoadRunner
Protocol - SAP-Web
Test Case
After the initial login to the system, we need to create an invoice, which generates an Excel file. We need to download that file and alter a few document-specific values.
After the data is modified, the Excel file needs to be uploaded again, which automatically triggers a save action.
At this point, we need to measure the time taken to upload the Excel file and the time taken to save the document.
Following this, I need to trigger the next step, a calculation of a few values. This is an asynchronous process, and its completion is only signalled by a notification on the application's Fiori screen.
After the calculation is done, I need to perform another step; the completion of that step is likewise signalled only by a notification.
We have tried the Ajax TruClient protocol as well, but it didn't work out.
I am able to download the Excel file and save it locally, but I am not able to edit the data; no alteration of the data via the script takes place.
Also, can someone suggest any other method to capture the time taken at multiple steps?

Think about this architecturally. TruClient is a browser; it is not a generic functional automation solution that can exercise anything that is not a browser. You need to operate two apps, Excel and a browser, so you need to move up the stack.
Don't time the work done inside Excel. That is 100% client-based.
You have three options if you absolutely, under every circumstance, must run Excel:
GUI Virtual User. This is a full-blown automation client. In the Micro Focus universe, this is QuickTest Pro. It can automate your browser. It can automate your Excel. It can automate your PowerPoint. It can even automate Solitaire. You will need a single full OS instance to run this.
Citrix Virtual User. If you head down this route you have options: Micro Focus, Tricentis, Login VSI, and perhaps a handful of smaller players.
Remote Desktop Virtual User. Similar to Citrix, but with fewer players; Micro Focus is the dominant solution.
If you leave behind the absolute, non-negotiable requirement to run Excel, then you can have a pre-prepared, modified Excel file for upload that represents the "changed" document and run this purely as an HTTP virtual user. TruClient is not going to operate your Windows common dialog box (again, it is not a browser) to allow you to pick a file.
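As a rough sketch of that HTTP-only approach in a SAP-Web/C vuser, you could time the upload (which carries the automatic save) as one transaction and then poll for the asynchronous notification as another. Every URL, form field name, and notification text below is an assumption about your application, not something taken from it:

    Action()
    {
        int i, done = 0;

        // Upload the pre-prepared, already-modified workbook; the save is
        // triggered automatically, so it falls inside this transaction.
        lr_start_transaction("upload_and_save_invoice");
        web_submit_data("upload_invoice",
            "Action=https://srm.example.com/invoice/upload",   // assumed URL
            "Method=POST",
            "EncType=multipart/form-data",
            "Mode=HTML",
            ITEMDATA,
            "Name=file", "Value=C:\\data\\invoice_modified.xlsx", "File=yes", ENDITEM,
            LAST);
        lr_end_transaction("upload_and_save_invoice", LR_AUTO);

        // Time the asynchronous calculation by polling whatever endpoint feeds
        // the Fiori notification until the (assumed) completion text appears.
        lr_start_transaction("calculation_complete");
        for (i = 0; i < 60 && !done; i++) {
            web_reg_find("Text=Calculation completed",          // assumed text
                         "SaveCount=calc_done_count",
                         LAST);
            web_url("poll_notifications",
                "URL=https://srm.example.com/notifications",    // assumed URL
                "Mode=HTML",
                LAST);
            done = atoi(lr_eval_string("{calc_done_count}"));
            if (!done)
                lr_think_time(5);  // whether think time counts toward the
                                   // transaction depends on run-time settings
        }
        lr_end_transaction("calculation_complete", done ? LR_PASS : LR_FAIL);

        return 0;
    }

The second step that also completes via a notification would simply be another transaction built on the same poll-until-found pattern.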

LoadRunner - Calling One Script from another

I am currently working on load testing of an SAP application, where I need to trigger an action in the SAP frontend (the Fiori screen) while the corresponding response time is only available in the SAP backend (I mean the SAP GUI screen).
I need to capture it this way because no request for that particular action is captured in either Fiddler or the browser dev tools.
My question is: currently I have 2 scripts - a. SAP Web, b. SAP GUI.
Can I call the SAP GUI script from the SAP Web script to capture the response time and write it to a log file?
TIA
Can I call the SAP GUI script from the SAP Web script to capture the response time and write it to a log file?
Not that I am aware of. If these two virtual users were the same architectural tier and language type, such as Winsock and HTTP, then you might consider a multi-protocol virtual user. But here you have multiple breaks in architecture making the jump from an HTTP user to SAP GUI. In general, you call down the stack and receive values back up the stack. An HTTP virtual user calling SAP GUI would be calling from the bottom of the Application layer to the top of it. This just doesn't work.
I think you have some tool issues to work out as well. It is odd that you would "write it in a log file" when a full transaction model exists to send a timing record to the testing tool for analysis alongside other timing records/transactions. You're not even taking advantage of in-tool capabilities such as the variants of lr_message(). And that is before considering the lock contention of multiple users all attempting to write to the same file. Look up lr_output_message() and lr_set_transaction() - one of these is likely a far better fit than the path you are on.
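As a minimal sketch of those two calls (the backend duration value, and how you obtain it, are assumptions; LoadRunner will not read it out of the SAP GUI for you):

    Action()
    {
        double backend_duration = 2.350;   // seconds; assumed to come from elsewhere

        // Write to the tool's output/log stream instead of a hand-rolled file,
        // avoiding the lock contention of many vusers sharing one file.
        lr_output_message("Backend save completed in %.3f seconds", backend_duration);

        // Better still: record it as a first-class transaction so it is
        // analyzed alongside every other timing record.
        lr_set_transaction("backend_save_time", backend_duration, LR_PASS);

        return 0;
    }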

Programmatically move or copy Google Drive file to a shared Windows folder/UNC

I have a Google Apps Script that builds a new CSV file any time someone makes an edit to one of our shared Google Sheets (via a trigger). The file gets saved to a dedicated Shared Drive folder (which is first cleaned out by trashing all existing files before writing the updated one). This part works splendidly.
I need to take that CSV and consume it in SSIS so I can datestamp it and load it into an MSSQL table for historical tracking purposes, but aside from paying for some third-party components (e.g. CDATA, COZYROC), I can't find a way to do this. Eventually this package will be deployed on our SQL Agent to run on a daily schedule, so it will be attached to a service account that won't have any sort of access to the Google Shared Drive. If I can get that CSV over to one of the shared folders on our SQL Server, I will be golden... but that is what I am struggling with.
If something via Apps Script isn't possible, can someone direct me as to how I might programmatically get an Excel spreadsheet to open, refresh its dataset, then save itself and close? I can get the data I need into Excel directly from the Google Sheet using a Power Query, but I need it to refresh itself in an unattended process on a daily schedule.
I found that CData actually has a tool called Sync which got us what we needed out of this. There is a limited-options free version of the tool (which they claim is "free forever") that runs as a local service. On a set schedule, it can query all sorts of sources, including Google Sheets, and will write out to various destinations.
The free version is limited in terms of the sources and destinations you can use (though there are quite a few), and it only allows 2 connection definitions. That said, you can define multiple source files, but only 1 source type (i.e. I can define 20 different Google Sheets to use in 20 different jobs, but can only use Google Sheets as my source).
I have it set up to read my shared Google Sheet and output the CSV to our server's share. An SSIS project reads the local CSV, processes it as needed, and then writes to our SQL Server. It seems to work pretty well if you don't mind having an additional service running and don't need a range of different sources and destinations.
Link to their Sync landing page: https://www.cdata.com/sync/
Use the Buy Now button and load up the free version in your cart, then proceed to check out. They will email you a key and a link to get the download from.

Perforce change-submit trigger to run script on client

I figured I'd post here, after posting on SuperUser, since I want to get input from software developers who might have encountered this scenario before!
I would like to initiate a series of validation steps on the client side on files opened within a changelist before allowing the changelist to be submitted.
For example, I wish to ensure that if a file is opened for add, edit, or remove as part of a changelist, a particular related file will be treated appropriately based on a matrix of conditions for that corresponding file:
Corresponding file being opened for add/edit/remove
Corresponding file existing on disk vs. not existing on disk
Corresponding file existing in depot vs. not existing in depot
Corresponding file having been changed vs. not having been changed relative to depot file
These validation steps must be initiated before the submit is accepted by the Perforce server. Furthermore, the validation must be performed on the client side since I must be able to reconcile offline work with the copies on clients' disks.
Environment:
Perforce 2017.2 server
MacOS and Windows computers submitting to different branches
Investigative Avenues Already Covered
Initial design was a strictly client-side custom tool, but this is not ideal since this would be a change of the flow that users are familiar with, and I would also have to implement a custom GUI.
Among other approaches, I considered creating triggers in 2017.2; however, even if I were to use a change-content trigger with all the changelist files available on the server, I would not be able to properly perform the validation and remediation steps.
Another possibility would be using a change-submit trigger and to use the trigger script variables in 2017.2 to get the client's IP, hostname, client's current working directory, etc. so that you could run a script on the server to try to connect remotely to the client's computer. However, running any script on the client's computer and in particular operating on their local workspace would require credentials that most likely will not be made available.
I would love to use a change-submit trigger on the Perforce server to initiate a script/bundled executable on the client's computer to perform p4 operations on their workspace to complete the validation steps. However, references that I've found (albeit from years ago) indicate that this is not possible:
https://stackoverflow.com/a/16061840
https://perforce-user.perforce.narkive.com/rkYjcQ69/p4-client-side-submit-triggers
Updating files with a Perforce trigger before submit
Thank you for reading and in advance for your help!
running any script on the client's computer and in particular operating on their local workspace would require credentials that most likely will not be made available.
This is the crux of it -- the Perforce server is not allowed to send the client arbitrary code to execute. If you want that type of functionality, you'd have to punch your own security hole in the client (and then come up with your own way of making sure it's not misused), and it sounds like you've already been down that road and decided it's not worth it.
Initial design was a strictly client-side custom tool, but this is not ideal since this would be a change of the flow that users are familiar with, and I would also have to implement a custom GUI.
My recommendation would be to start with that approach and then look for ways to decrease friction. For example, you could use a change-submit trigger to detect whether the user skipped the custom workflow (perhaps by having the custom tool put a token in the change description for the trigger to validate), and then give them an error message that puts them back on track, like "Please run Tools > Change Validator, or contact wanda#yourdomain.com for help".
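As a rough sketch of that server-side check (the trigger name, token string, and paths are assumptions, and the trigger host is assumed to have a p4 client and credentials configured), registered in the triggers table as something like check_token change-submit //depot/... "/p4/triggers/check_token %change%":

    /* Hypothetical change-submit trigger: reject the submit unless the change
     * description contains a token that the client-side tool put there. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define TOKEN "[validated-by-change-validator]"   /* assumed token */

    int main(int argc, char *argv[])
    {
        char cmd[256];
        char line[4096];
        int found = 0;
        FILE *p;

        if (argc < 2) {
            fprintf(stderr, "usage: check_token <changelist>\n");
            return 1;
        }

        /* Read the pending change spec; the Description field is in the output. */
        snprintf(cmd, sizeof(cmd), "p4 change -o %s", argv[1]);
        if ((p = popen(cmd, "r")) == NULL)
            return 1;
        while (fgets(line, sizeof(line), p) != NULL) {
            if (strstr(line, TOKEN) != NULL) {
                found = 1;
                break;
            }
        }
        pclose(p);

        if (!found) {
            /* Anything printed here is shown to the submitting user. */
            printf("Please run Tools > Change Validator before submitting.\n");
            return 1;   /* non-zero exit rejects the submit */
        }
        return 0;
    }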

Running (& compiling) untrusted user code

I want to create an application with a feature that allows users to submit code which the server will compile and run, similar to Ideone & Spoj. How do I do this securely and in a scalable manner?
Partial Solutions I'm aware of:
IDEA 1 - 3rd Party Services
The Sphere Engine. However this costs a LOT of money!
I'm not aware of any open source application I can run on my server to achieve this, or of a cheaper alternative. Please correct me if I'm wrong.
IDEA 2 - VM
This would be the next most sensible choice. However, I'm unsure how to implement it. For example, let's say I created a VM and started to run the user's code. This would restrict damage to MY system, but not damage to the VM, which other users would have to use. Does that mean I have to create a new VM each and every time I want to compile and run a user's code? That clearly is not scalable - correct me if I'm wrong.
Having not set up such a thing myself, I assume that services like TravisCI (which compiles your code and runs it under test cases you provide) have a base virtual machine image which boots up and processes your code. The next user to come along gets a separate VM booted from the same base image; your changes aren't stored.
So inside the VM, the user code can do whatever it likes. All of its effects, except what is written to the console, are erased at the end of the time limit.
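To make the "changes aren't stored" part concrete, here is a minimal sketch under assumed tooling (QEMU plus a prepared base disk image, neither of which is specified above): the -snapshot flag discards every disk write when the throwaway VM exits, and a coreutils timeout enforces the time limit.

    /* Hedged sketch: boot a disposable VM from a read-only base image.
     * Image path, memory size, and the 60-second limit are all assumptions. */
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* -snapshot: write changes to temporary storage, never to base.img
         * -nographic: run headless with the console on stdio */
        const char *cmd =
            "timeout 60 qemu-system-x86_64 "
            "-snapshot -m 512 -nographic "
            "-drive file=base.img,format=qcow2";

        int status = system(cmd);
        if (status != 0)
            fprintf(stderr, "VM run ended with status %d (timed out or failed)\n", status);
        return 0;
    }

Each run starts from the same base image, so one user's submission can never poison the environment seen by the next.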

Extract SAS Enterprise Guide into Unix Server runnable batch?

We have built a project in Enterprise Guide for the purpose of creating easily understandable and maintainable code. The project contains a set of process flows which should be run in a specific order. We need to run this project on a Linux server machine, where the SAS Metadata Server is running.
The basic idea is to extract this project into SAS code which we would be able to run from the command line in Linux as a batch job.
Question 1:
Is there any other way to schedule a batch job on a Linux-hosted SAS server? I have read about VBS scripting for scheduling/running batch jobs, but for this to be done on a Linux server, an installation of WINE is required, which, on a production machine that already runs a number of other important applications, is almost completely out of the question.
Is there a way to export a complete project into SAS code, provided that I give the specific order in which the process flows should run? I have tried an ordered list, which lets you build a list of tasks to run in order (although there is no way to choose a whole process flow as a single task), but unfortunately this ordered list cannot later be exported as SAS code.
The current solution we use is the following:
We export each individual process flow of the SAS EG project into SAS code, and then create another SAS program with %include lines to run all the extracted code files in the order we want. This is of course a possible solution, but definitely not the most elegant one.
Question 2:
Since I don't know exactly how the code is exported, are there any dangers I should bear in mind with the solution I chose?
Is there any other, more elegant way?
You have a couple of options from what I'm familiar with, plus I suspect if Dom happens by he'll know more. These answers are based on EG 6.1, which is the current version (ships with 9.4); it's possible some of these things may not be true in earlier versions.
First, if you're running Enterprise Guide from Windows, you can schedule the job locally (on any Windows machine with Enterprise Guide). You're not scheduling the server directly; you schedule Windows to launch an EG process that connects to the server and does its magic. That's how I largely interact with scheduling (because I have a fairly 'light' scheduling need).
Second, from the blog post "Four Ways to Schedule SAS Tasks", options 3 and 4 may be helpful for you. The SAS Platform Suite is designed in part for scheduling, and the option of using SAS Management Console to schedule via operating system tools is also very helpful.
Third, you may want to look into SAS Stored Processes, which should be schedulable. A process flow can be converted into a stored process.
For your specific questions:
Question 1: When you export a process flow or a project, at least in 6.1 you have the option to change the order in which the programs are exported. It's manual, so it's probably not perfect, but it does give you that option. (The code seems to be in creation order by default, which is sub-optimal.) The project export does group process flows together, but you don't have the option of reordering whole process flows - you have to move each program around, which would be tedious. It also, of course, gives you less flexibility if you need to run some programs multiple times.
Question 2: As Stig Eide points out in the comments, make sure your system option LRECL is > 256 (the default) or you run some risk of code being cut off. In 9.2+ this is modifiable; just place LRECL=32767 in your config.sas file.
