How to generate a random string in Rational Performance Tester?

I want to do performance testing using Rational Performance Tester, where most input fields in the test must be unique, so I have to generate random values to use at run time. I don't want to use Excel.
Can anyone help?

You can use the built-in data source Random Number. The substitution wizard provides options to set minimum and maximum values.
The Random Number source can be found via Substitute > Select Data Source.
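If you need random strings rather than just numbers, RPT also lets you attach custom Java code to the test and substitute the value it returns, the same way as a built-in data source. A minimal sketch, assuming the standard ICustomCode2 interface that RPT custom code classes implement (the class name and string length here are arbitrary):

import java.util.Random;
import com.ibm.rational.test.lt.kernel.custom.ICustomCode2;
import com.ibm.rational.test.lt.kernel.services.ITestExecutionServices;

// Returns a fresh random 10-character alphanumeric string on each call.
public class RandomStringGenerator implements ICustomCode2 {
    private static final String CHARS =
        "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";
    private static final Random RANDOM = new Random();

    public String exec(ITestExecutionServices tes, String[] args) {
        StringBuilder sb = new StringBuilder(10);
        for (int i = 0; i < 10; i++) {
            sb.append(CHARS.charAt(RANDOM.nextInt(CHARS.length())));
        }
        return sb.toString();
    }
}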

Related

How to pass different data values to the same step from Excel in Cucumber

I am trying to use the step "When user enters the field value as "something"" for 10 scenarios. Each test case should pass a different value; however, steps cannot be duplicated in the step definition.
I have an Excel hashmap utility that can read values from Excel based on a keyword (the scenario name).
Now how can I get the scenario name from the feature file and fetch the data specific to each test case, so that the right value is used for that particular test case while also avoiding duplication of the step?
You are basically looking at a Scenario Outline, which can help you data-drive the same scenario. One option is to put a step like this:
Scenario Outline: Test multiple inputs
Given some blah
When user enters the field value as "<scenarioData>"
Examples:
|scenarioData|
|data1|
|data2|
|data3|
Now in Excel your keys are data1, data2 and data3. In the step definition:
#When("user enters the field value as {word}")
public void printMeaning(String dataIdentifier) {
//Read the excel, get the value for dataIdentifier
//Input data
}
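For completeness, the Excel lookup utility the question mentions could be as simple as the following sketch using Apache POI (the file path, sheet index and two-column key/value layout are assumptions):

import java.io.File;
import java.util.HashMap;
import java.util.Map;
import org.apache.poi.ss.usermodel.Row;
import org.apache.poi.ss.usermodel.Sheet;
import org.apache.poi.ss.usermodel.Workbook;
import org.apache.poi.ss.usermodel.WorkbookFactory;

public class ExcelUtil {
    // Reads a two-column sheet into a key -> value map,
    // e.g. "data1" -> the text to type into the field.
    public static Map<String, String> readKeyValues(String path) throws Exception {
        Map<String, String> data = new HashMap<>();
        try (Workbook wb = WorkbookFactory.create(new File(path))) {
            Sheet sheet = wb.getSheetAt(0);
            for (Row row : sheet) {
                data.put(row.getCell(0).getStringCellValue(),
                         row.getCell(1).getStringCellValue());
            }
        }
        return data;
    }
}

The step definition above can then call ExcelUtil.readKeyValues(...).get(dataIdentifier) to resolve data1, data2 or data3 to the actual input value.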
You should understand what the meaning of each different value is. If you don't, then either you are:
* repeating the same test (which is pointless), or
* running tests that you don't understand (which isn't a good thing).
By giving each value a name which describes WHY you are testing it, you should be able to write a better scenario. Instead of using a single step to enter any value, you use a specific step to enter specific values. Then you don't need any external storage for your values.
e.g.
You are testing passwords.
Your current approach puts lots of passwords in a file then has a single step input different passwords for different reasons.
My approach uses a single descriptive step for each password e.g.
When I input a valid password
When I input a password that is too short
When I input a password with a non UTF-8 character
When I input a password without a number
...
Now if you use these in several scenarios, you get simpler cukes of much higher quality, in that they document WHY your application behaves the way it does, e.g.
Scenario: Short password
Given I am registering
When I provide a short password
Then I should be told I need to provide a longer password
Step duplication is not important in Cucumber. The important thing is to write simple scenarios that clearly express behaviour in terms of WHAT it is and WHY it's important, without getting into HOW it's implemented.

Append Query and Filter Variables to Output in Oracle OBIEE

I frequently use OBIEE to create custom analyses from predefined subject areas. I'm often making extensive use of filters, as I'm typically pulling data on a very specific issue from a huge dataset. A problem I have repeatedly come across is trying to recreate a previous analysis complete with the filter variables.
Obviously if it's something I know I'll come back to, I'll save the query. But that's often not the case. Maybe a month or two will go by and I'll need to generate a new version of a previous report to compare with the original, and I end up not being able to trust the output because I'm not sure it's using the same variables.
Is there a way to append the query and filter variables to the report itself so it can be easily referenced and recreated?
Additional info.
* I almost exclusively output the data from OBIEE as a .csv and do most of the work in Excel pivot tables I save as .xlsx.
* I'm not a DBA.
a) You can always save filters as presentation catalog objects instead of clicking them together from zero every single time. Think LEGO bricks for adults. OBIEE is based on the concept of stored and reusable objects.
b) You will never have your filters in your CSV output since CSV is a raw data output and not a formatted / graphical one. If you were using a graphical analysis export you could always add the "Filters View" to your results.
Long story short: since you're using OBIEE as a data-dump tool and circumventing what the tool is designed to do and how it is supposed to function, you're constraining yourself in terms of the benefits and functionality you can get from it.

Should I Hardcode 81000 String Constants?

I have a list of 81,000 strings (provinces, cities, districts, etc.) to be populated as drop-down options in a form when a user signs up for an account.
Should I have a hardcoded map for this number of strings? Do I need a database?
What is the maximum number of constant strings you think is acceptable? I know 100 constants are no problem hardcoded, but where is the borderline beyond which we need a database?
In my experience, a project like this should be a frontend app + Redis for caching (optional) + a DB (your choice).
If you build your app with these 80k rows inside it, the bundle size will increase ridiculously, and search and sorting will slow down too.
I would suggest you definitely want to consider a database (and probably also some sort of caching) for this use case. The advantages will be:
Only loading the relevant data into memory.
Breaking the data into logical parts.
Being able to map the relationships between the data (e.g. cities within a province) in a logical way.
Points 2 and 3 also make maintenance and updates a lot more straightforward.
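As a concrete illustration of points 1 and 3, the form can fetch only the children of the selected parent on demand instead of loading all 81,000 strings at once. A minimal JDBC sketch, where the city table and province_id column are hypothetical names:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class LocationDao {
    // Returns only the cities of one province, not the whole 81,000-row list.
    public List<String> citiesOf(Connection conn, int provinceId) throws SQLException {
        List<String> cities = new ArrayList<>();
        String sql = "SELECT name FROM city WHERE province_id = ? ORDER BY name";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setInt(1, provinceId);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    cities.add(rs.getString("name"));
                }
            }
        }
        return cities;
    }
}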
It depends: if you are concerned about fast page loads, don't add the 81,000 string constants; if load time isn't an issue for you, you can add them.
But if you still want to add them, put them in a separate JS file and load it asynchronously once the website has finished loading.

Alloy Analyzer 4.1.10 - Generating Large Datasets

I am new to Alloy. I want to generate large data sets with the help of Alloy, but I am not sure how to do it.
I am able to generate data in small amounts, but I want to generate a large amount of unique data.
I am currently using the run command, and it's perfectly fine if I get data in the format Name_0, Name_1, ... (for first names).
Any tip in this regard would be a great help.

What code could be used as a string aggregator for Sybase? (Like Oracle's stragg)

In my travels in Oracle, the 'stragg' function, or 'String Aggregator', was life-saving when I had to create dynamic SQL queries on the fly.
You can read up about it here: http://www.oratechinfo.co.uk/delimited_lists_to_collections.html
The basic use of it was:
select stragg(fruit) from food;
fruit
-----------
apple,pear,banana,strawberry
1 row(s) returned
So simple to use: concatenating chr(13) turned it into a long list, and selecting information from system tables gave a 5-minute solution to dynamically generated SQL, e.g. auditing triggers.
Now I've been charged with transferring Oracle functionality related to auditing into Sybase, and a function similar to stragg would be ideal for this purpose.
E.g.
declare @mytable varchar(30)
select @mytable = 'table_of_fruit'
select 'insert into ' + @mytable + '_copy' + char(10)
     + '(' + stragg(c.name) + ')' + char(10)
     + 'select ' + stragg(c.name) + char(10)
     + 'from inserted'
  from syscolumns c
 where c.id = object_id(@mytable)
-- stragg() here is the Oracle-style aggregate I am looking for in Sybase
------------------------------------------
insert into table_of_fruit_copy
(fruit, sweetness, price)
select fruit, sweetness, price
from inserted
Done. Simple.
Except I don't know how to get a string-aggregation function working in Sybase.
Does anyone know of an attempt to do this kind of thing, or code that could work the same as stragg that could be used in this way?
The alternative at the moment is printing code based on complex cursors and such (sample LOC: 500), or select statements combining static strings and columns from user tables (sample LOC: 200). Stragg would severely reduce the complexity of this code, and would be a great deal of help in the future (sample LOC: who knows, maybe 50?)
p.s. I'm calling these selects through a shell script then piping them to file, then running the file through iSQL. Not the nicest solution, but it's better than the alternatives.
There are three separate answers
Question
You have made comments about simplicity, which need to be addressed before we get to the solution.
It is a common requirement to be able to take a delimited list of values, say A,B,C,D, and treat this data like it was a set of rows in a table, or vice versa
This is one of the Top Ten Worst Programming Practices I read about recently.
In general, Sybase types tend to be somewhat more academically and Relationally qualified than Oracle types, so we simply do not do that sort of thing in SybaseLand or DB2Land.
In 20 years of working with Sybase, I have had to code that as part of my project just once, and that was for a non-technical Auditor who loaded the result set into MS Access.
On the other hand, I have had to code that at least 12 times, when producing text files for importation into Oracle databases (fulfilling external requirements is outside my project, but I satisfy any such requirement free). Obviously the target databases were sub-standard and non-relational (loading a column with more than one datum breaks 1NF, and creates Update Anomalies), which is typical of what Oracle types have to do to get some speed.
Therefore, no, it is not simplicity, at least in the sense of that principle. It is by definition, complexity.
Your reference to "arrays" is incorrect. All commercial DBMSs handle arrays, according to ISO/IEC/ANSI SQL (the STRAGG and LIST operators are non-standard SQL, therefore not SQL). Sybase is very strong at processing arrays. If it were an array, you would not need special hand coding to handle it (and you do, as per your question). This is not an array; there is no definition to the cells. This is a single concatenated scalar string.
Pivoting is an entirely different process, which uses set-processing; it does not require row-processing. (I understand on good authority, that Oracle is hopeless at scalar subqueries, and thus Oracle people are used to writing them as [very inefficient] joins or inline views, and then filtering: all that can be elevated to set-processing via scalar subqueries, and it will perform much faster. Particularly your Pivots.)
Even the author in your link posts as follows. Please familiarise yourself with the caveats:
It's as simple as this: If you want to have a system with no logical limitation in the number of data elements passed to a given process, then forget the following mechanisms! They are simply the wrong way to approach the problem.
Therefore, know whatever you are doing is sub-standard, non-relational, and limited; and go ahead with your eyes open. No use pretending that: it will not break; it is not limited; it is an "array"; or that Sybase doesn't have a neat little function that Oracle has. Any professional will see through all that. And if the string length is exceeded, for God's sake send some indicator back to the caller ("!Exceeded" in the string) identifying that condition.
Essentially you are turning the set-processing engine on its head, and forcing it into row-processing mode, so it will be very slow. A WHILE loop is distinctly faster than a cursor, but both are in the same class, row-processors.
The alternative at the moment is printing code based on complex cursors and such
What 200 or 500 LoC? It is possible I am missing something, but my code is the same few lines of code identified under "Using a Table Function" in your link. Maximum 20, if you count nice formatting; the loop; initialisation; error handling. There is nothing "complex" about it. Do the exact reverse to concatenate a single string from multiple rows. We use stored procedures for this (which Oracle does not have, really; PL/SQL is a different animal). If you have ASE 15.0.2 or greater, you can use a User Defined Function, which you can then use in place of a column. Stored procs are better for true arrays.
The concatenation operator in Sybase is the plus sign. For reversal (decomposing the CSV string) you need the CHARINDEX and SUBSTRING functions, as in the sketch below.
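A minimal sketch of both directions in ASE T-SQL, using names from the question's example (note: the row order of the variable-assignment trick in part 1 is not guaranteed; the WHILE loop or stored procedure the answer describes is the fully supported form):

-- 1) Aggregate rows into one CSV string via repeated variable assignment
declare @list varchar(8192)
select @list = ''
select @list = @list
             + case when @list = '' then '' else ',' end
             + c.name
  from syscolumns c
 where c.id = object_id('table_of_fruit')
print '%1!', @list

-- 2) Decompose a CSV string back into items with CHARINDEX and SUBSTRING
declare @csv varchar(8192), @pos int, @item varchar(255)
select @csv = 'apple,pear,banana'
while @csv != ''
begin
    select @pos = charindex(',', @csv)
    if @pos > 0
        select @item = substring(@csv, 1, @pos - 1),
               @csv  = substring(@csv, @pos + 1, char_length(@csv))
    else
        select @item = @csv, @csv = ''
    print '%1!', @item
end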
You may need the Function Reference Manual, if for nothing else, to avoid writing code where we have functions.
Likewise, we do not have a RANK() function. We are quite happy with the 4 lines of code required for the subquery (sketched below). It is only required for Oracle because subqueries are crippled.
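For reference, this is the correlated-subquery idiom that stands in for RANK(), using the fruit example from the question (the sweetness column is an assumption):

-- Rank each fruit by sweetness, highest first
select fruit,
       (select count(*)
          from food x
         where x.sweetness > f.sweetness) + 1 as sweetness_rank
  from food f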
OK, I have answered your question. Now to address the approaches.
You will be aware that code using Oracle Extensions to the SQL standard will need to be changed.
Sybase is way more automated than Oracle; if you familiarise yourself with its feature set, in many instances you can get the same result (as you did in Oracle) without writing any code. Writing code-for-code blocks is the chain-gang, rock-breaking method of building roads, in the context of bulldozers. Even if your company had good reason to use that method, you need to be aware that features work quite differently, e.g. triggers, which is why I am posting so much detail.
Another issue that will annoy you is that Oracle isn't really ANSI SQL compliant (stretches the definitions in many places, in order to appear to be compliant), and Sybase, given its customer base, is rigidly SQL compliant. So in addition to the same function working differently, or in a different deployment, you need to be aware that code changes may be required to elevate Oracle code to ANSI compliance levels, just to execute on an ANSI SQL compliant platform.
I am not sure if you are trying to write code for the content of a trigger, or if you are trying to capture the changes to a database. I will provide both answers.
Auditing
Capture Changes to Database
We have a very robust, fast and configurable Auditing subsystem, fit for high volumes and banking-level auditing requirements. Get your DBA to set up the sybaudit (separate) database, and to configure exactly what changes need to be captured. This facility will perform much faster than any code you or I could write in a trigger (as much as 100 times faster than the row-by-row processing required for the above), as it is executed within the engine, within your executing thread. And of course the setup time is a fraction of your coding time.
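For reference, a sketch of what that configuration looks like with ASE's sp_audit procedure (the table name comes from the question's example; which options to enable depends on what needs capturing):

-- Run by a user with sso_role, once the audit database is installed:
-- capture all INSERTs on the table, for all users
sp_audit 'insert', 'all', 'table_of_fruit', 'on'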
Triggers
Again, I am not sure exactly what you are trying to achieve, but assuming you want to copy every insert to some table into a COPY of that table (inside the trigger), the example code you have provided will not work (and I am not counting syntax issues).
Speaking to your example, you need to do way more work, to deal with the different datatypes; column sizes; precisions; scale; etc. And perhaps the UPDATE() function to identify which columns have changed (for an UPDATE trigger of course). If all you are trying to do is convert the various datatypes to strings, check the CONVERT() function.
Triggers are transactional.
Never place row-processing code in a Trigger (it will strangle the table)
You can't place Dynamic SQL in a Trigger.
But in Sybase even that is not necessary. Refer to the User Guide; chapter 19 is devoted to Triggers, with several variations and examples. Inside the trigger, you should be able to simply:
INSERT table_copy
SELECT column_list -- never use * unless you want the db fixed in cement
FROM inserted
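Wrapped in a complete trigger, using the column names from the question's example, that is the following sketch:

create trigger tr_fruit_copy
on table_of_fruit
for insert
as
insert into table_of_fruit_copy (fruit, sweetness, price)
select fruit, sweetness, price
  from inserted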
If you are trying to copy the inserts to all tables into one Audit table, then beware; now I understand your example a little bit more. You will be forcing a highly Symmetric Multi-Threading server (Oracle is not a server in the architecture sense) into single-threading through your table. Auditing is multi-threaded.
Last, the use of manual methods of any kind is not required, so if you could expand a bit more on your PS, what the requirement you are trying to fulfil is, I can identify the programmatic method for you. It appears you are trying to use the PL/SQL approach (which is very limited).
Just use the LIST() function. It's a direct replacement for the stragg() function (note that LIST() is available in Sybase SQL Anywhere and Sybase IQ, not in ASE). Example:
SELECT LIST(state, ', ') AS states FROM cities
Result:
states
CA, CA, MA, NY
