How do I find the duration of a user in Karate-Gatling? [duplicate] - performance-testing

This question already has answers here:
Is there an option for us to customize and group the test scenarios in the statics section of the Karate Gatling Report?
(2 answers)
Closed 1 year ago.
I am using the Karate-Gatling combo for performance testing. In 0.9.6, the Gatling logs included a thread index, from which I could determine how long each user took to finish the scenario. In 1.0.1 the logs no longer contain this information.
Is there a way to get the time it took to process a single user in 1.0.1? Or am I stuck with some sort of statistic along the lines of Duration * ConcurrentUsers / TotalUsers?

This is news to me. There have been some code contributions, and no one else has reported this. The Gatling version has also been upgraded, and perhaps this is simply no longer supported. The best thing would be for you to help us understand what should be changed and do some investigation. We have a nice developer guide, so you should be able to build from source easily.
I suspect it is this commit: https://github.com/intuit/karate/pull/1335/files
If we don't get help, it is unlikely that we will resolve this, as no one else has reported it.

Related

Can Gatling execute more than one Karate scenario from a feature file? [duplicate]

This question already has an answer here:
karate-gatling: how to simulate a karate feature except those scenarios tagged with #ignore
(1 answer)
Closed 1 year ago.
I am using the Karate-Gatling combo to test a backend. I have one test where I would like to:
Update some info about account
Upload multiple files (one by one)
Save changes
The simplest way to simulate this would be to have a Scenario for steps 1 and 3, and a Scenario Outline for step 2 with all the different files listed in Examples:, all in the same .feature file.
However, when I run this with Gatling, only the first scenario in the list gets executed. Is there a way to make Gatling run the others as well? I suppose there could be a trick with dynamic outlines, but I'm asking in case I'm missing something obvious.
Do you want to execute them in sequence or in parallel? Remember scenarios are supposed to run in parallel.
Could you provide extracts of the source code?
Also, it would be good to know the Karate version, considering the recent 1.0 release.
All the Scenario-s in the feature-file should be executed. Please check if maybe the first Scenario is exiting with an error.
Otherwise this is a bug. Please then follow this process: https://github.com/intuit/karate/wiki/How-to-Submit-an-Issue

Liferay lifecycle methods in JSR-168 and JSR-286

I am a newbie to Liferay and portal applications in general. I started studying them only 15 days ago, and yesterday I attended an interview where they asked me the following questions. I answered some of them, but please help me understand them better.
What environment have you worked in?
Answer: I was using the Eclipse IDE and the plugin SDK (according to them this was not correct; perhaps I am wrong).
If we have a page containing a text field and a button, what will happen if I click the button?
Answer: I said it depends on which framework we are using, Struts or Spring. Then they asked what would happen if we were not using any framework.
I then said the portlet lifecycle methods would run (init, processAction, render, destroy), but according to them this was not correct either.
Please help me to understand the correct answer.
Is there anything specific to the Liferay portal? I am quite confused.
And are there any differences in lifecycle methods between JSR-168 and JSR-286?
Thanks
What environment have you worked in?
That entirely depends on what they mean by environment, as it could be the programming environment (IDE, other tools, languages) or the work environment (teams, methodologies, etc.). Could you elaborate more on what they asked?
If we have a page containing a text field and a button, what will happen if I click the button?
Again, this is really open ended and depends on what code is written. However, if we assume that the button submits a form whose action invokes the portlet's processAction method, then that is what would happen. After that it would probably enter the render phase again, but it depends on what code has been written inside the portlet!
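To make the lifecycle concrete, here is a minimal sketch of a JSR-168-style portlet; the class and parameter names are my own placeholders, not something from the question. The form submit from the button drives processAction, after which the container invokes the render phase again.

    // Minimal GenericPortlet sketch: a form submit triggers processAction(),
    // then the container calls the render phase (doView) again.
    import java.io.IOException;
    import java.io.PrintWriter;
    import javax.portlet.ActionRequest;
    import javax.portlet.ActionResponse;
    import javax.portlet.GenericPortlet;
    import javax.portlet.PortletException;
    import javax.portlet.RenderRequest;
    import javax.portlet.RenderResponse;

    public class GreetingPortlet extends GenericPortlet {

        // Called once by the container when the portlet is put into service.
        @Override
        public void init() throws PortletException {
            super.init();
        }

        // Invoked when the form's action URL is submitted (the button click).
        @Override
        public void processAction(ActionRequest request, ActionResponse response) {
            String name = request.getParameter("name");
            // Hand the state over to the following render phase.
            response.setRenderParameter("name", name == null ? "" : name);
        }

        // Invoked after processAction, and again on every page refresh.
        @Override
        protected void doView(RenderRequest request, RenderResponse response)
                throws PortletException, IOException {
            response.setContentType("text/html");
            PrintWriter out = response.getWriter();
            out.println("Hello, " + request.getParameter("name"));
        }

        // Called once when the portlet is taken out of service.
        @Override
        public void destroy() {
            super.destroy();
        }
    }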
I personally think their questions are a bit generic, and they may have been looking for you to narrow down their questions more.
What do other people think?

Any good strategies for dealing with 'not reproducible' bugs? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 2 years ago.
Very often you will get or submit bug reports for defects that are 'not reproducible'. They may be reproducible on your computer or software project, but not on a vendor's system. Or the user supplies steps to reproduce, but you can't see the defect locally. Many variations on this scenario of course, so to simplify I guess what I'm trying to learn is:
What is your company's policy towards 'not reproducible' bugs? Shelve them, close them, or ignore them? I occasionally see intermittent, non-reproducible bugs in third-party frameworks, and these are pretty much always closed instantly by the vendor... but they are real bugs.
Have you found any techniques that help in fixing these types of bugs? Usually what I do is get a system info report from the user, and steps to reproduce, then search on keywords, and try to see any sort of pattern.
Verify the steps used to produce the error
Oftentimes the people reporting the error, or the people reproducing the error, will do something wrong and not end up in the same state, even if they think they are. Try to walk it through with the reporting party. I've had a user INSIST that the admin privileges were not appearing correctly. I tried reproducing the error and was unable to. When we walked it through together, it turned out he was logging in as a regular user in that case.
Verify the system/environment used to produce the error
I've found many 'irreproducible' bugs and only later discovered that they ARE reproducible on Mac OS X 10.4 running a particular version of Safari. And this doesn't apply only to browsers and rendering; it can apply to anything: the other applications that are currently running, whether the user is on RDP or local, admin or regular user, etc. Make certain you get your environment as close to theirs as possible before calling it irreproducible.
Gather Screenshots and Logs
Once you have verified that the user is doing everything correctly and still getting a bug, and that you're doing exactly what they do, and you are NOT getting the bug, then it's time to see what you can actually do about it. Screenshots and logs are critical. You want to know exactly what it looks like, and exactly what was going on at the time.
It is possible that the logs could contain some information that you can reproduce on your system, and once you can reproduce the exact scenario, you might be able to coax the error out of hiding.
Screenshots also help with this, because you might discover that "X piece has loaded correctly, but it shouldn't have because it is dependent on Y", and that might give you a hint. Even if the user can describe what they were doing, a screenshot could help even more.
Gather step-by-step description from the user
It's very common to blame the users and not trust anything they say (because they call a 'usercontrol' a 'thingy'), but even though they might not know the names of what they're seeing, they will still be able to describe some of the behaviour. This includes minor errors that may have occurred a few minutes BEFORE the real error, or slowness in things that are usually fast. All of these can be clues to help you narrow down which aspect is causing the error on their machine and not yours.
Try Alternate Approaches to produce the error
If all else fails, try looking at the section of code that is causing problems, and possibly refactor or use a workaround. If it is possible for you to create a scenario where you start with half the information already there (hopefully in UAT), ask the user to try that approach and see if the error still occurs. Do your best to create alternate but similar approaches that put the error in a different light so that you can examine it better.
Short answer: Conduct a detailed code review on the suspected faulty code, with the aim of fixing any theoretical bugs, and adding code to monitor and log any future faults.
Long answer:
To give a real-world example from the embedded systems world: we make industrial equipment, containing custom electronics, and embedded software running on it.
A customer reported that a number of devices on a single site were experiencing the same fault at random intervals. Their symptoms were the same in each case, but they couldn't identify an obvious cause.
Obviously our first step was to try and reproduce the fault in the same device in our lab, but we were unable to do this.
So, instead, we circulated the suspected faulty code within the department, to try and get as many ideas and suggestions as possible. We then held a number of code review meetings to discuss these ideas, and determine a theory which: (a) explained the most likely cause of the faults observed in the field; (b) explained why we were unable to reproduce it; and (c) led to improvements we could make to the code to prevent the fault happening in the future.
In addition to the (theoretical) bug fixes, we also added monitoring and logging code, so if the fault were to occur again, we could extract useful data from the device in question.
To the best of my knowledge, this improved software was subsequently deployed on site, and appears to have been successful.
resolved "sterile" and "spooky"
We have two closed bug categories for this situation.
sterile - cannot reproduce.
spooky - it's acknowledged there is a problem, but it just appears intermittently, isn't quite understandable, and gives everyone a faint case of the creeps.
Error-reporting, log files, and stern demands to "Contact me immediately if this happens again."
If it happens in one context and not in another, we try to enumerate the differences between the two and eliminate them.
Sometimes this works (e.g. other hardware, dual core vs. hyperthreading, laptop-disk vs. workstation disk, ...).
Sometimes it doesn't. If it's possible, we may start remote debugging. If that doesn't help, we may try to get our hands on the customer's system.
But of course, we don't write too many bugs in the first place :)
Well, you try your best to reproduce it, and if you can't, you take a long think and consider how such a problem might arise. If you still have no idea, then there's not much you can do about it.
Some of the new features in Visual Studio 2010 will help. See:
Historical Debugger and Test Impact Analysis in Visual Studio Team System 2010
Better Software Quality with Visual Studio Team System 2010
Manual Testing with Visual Studio Team System 2010
Sometimes the bug is not reproducible even in a pre-production environment that is the exact duplicate of the production environment. Concurrency issues are notorious for this.
Random Failures Are Often Concurrency Issues
Link: https://pragprog.com/tips/
The reason can be simply the Heisenberg effect, i.e. observation changes behaviour. Another reason can be that the chances of hitting the combination of events that triggers the bug are very small.
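As a contrived illustration (mine, not from the book tip above), here is the kind of data race in Java that behaves this way: the defect is present on every run, but how badly it shows up depends on thread scheduling, and extra instrumentation or a debugger can change the interleaving enough to hide it.

    // Shared counter updated without synchronization: a classic lost-update race.
    import java.util.ArrayList;
    import java.util.List;

    public class IntermittentRace {
        private static int hits = 0; // shared and deliberately unsynchronized

        public static void main(String[] args) throws InterruptedException {
            List<Thread> threads = new ArrayList<>();
            for (int i = 0; i < 4; i++) {
                Thread t = new Thread(() -> {
                    for (int j = 0; j < 100_000; j++) {
                        hits++; // read-modify-write, not atomic
                    }
                });
                threads.add(t);
                t.start();
            }
            for (Thread t : threads) {
                t.join();
            }
            // Expected 400000; lost updates make the actual value vary from run to run.
            System.out.println("hits = " + hits);
        }
    }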
Sometimes you are lucky and you have audit logs that you can playback, greatly increasing the chances of recreating the issue. You can also stress the environment with high volumes of transactions. This effectively compresses time so that if the bug occurs say once a week, you may be able to reliably reproduce it in 1 day if you stress the system to 7 X the production load.
The last resort is whitebox testing where you go through the code line by line writing unit tests as you go.
I add logging to the exception handling code throughout the program. You need a method to collect the logs (users can email them, etc.).
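One possible way to do that in Java (the logger name and file location below are placeholders) is to install a default uncaught-exception handler that writes to a log file the user can mail back:

    // Route uncaught exceptions into a file the user can send back to you.
    import java.io.IOException;
    import java.util.logging.FileHandler;
    import java.util.logging.Level;
    import java.util.logging.Logger;
    import java.util.logging.SimpleFormatter;

    public class CrashLog {
        private static final Logger LOG = Logger.getLogger("app.crash");

        public static void install() throws IOException {
            // "%h" expands to the user's home directory.
            FileHandler handler = new FileHandler("%h/myapp-crash.log", true);
            handler.setFormatter(new SimpleFormatter());
            LOG.addHandler(handler);

            // Any exception that escapes a thread ends up in the log file.
            Thread.setDefaultUncaughtExceptionHandler((thread, error) ->
                    LOG.log(Level.SEVERE, "Uncaught exception in " + thread.getName(), error));
        }

        public static void main(String[] args) throws IOException {
            install();
            throw new IllegalStateException("demo failure"); // lands in ~/myapp-crash.log
        }
    }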
Preemptive checks for code versions and sane environments are a good thing too. With the ease of software updates these days the code and environment the user is running has almost certainly not been tested. It didn't exist when you released your code.
With a web project I'm developing at the moment, I'm doing something very similar to your technique. I'm building a page that I can direct users to in order to collect information such as their browser version and operating system. I'll also be collecting the app's registry info so I can have a look at what they've been doing.
This is a very real problem. I can only speak for web development, but I find users are rarely able to give me the basic information I would need to look into the issue. I suspect it's entirely possible to do something similar with other kinds of development. My plan is to keep working on this system to make it more and more useful.
But my policy is never to close a bug simply because I can't reproduce it, no matter how annoying it may be. And then there are the cases when it's not a bug, but the user has simply gotten confused. Which is a different type of bug, I guess, but just as important.
You talk about problems that are reproducible but only on some systems. These are easy to handle:
First step: By using some sort of remote software, you let the customer tell you what to do to reproduce the problem on the system that has it. If this fails, then close it.
Second step: Try to reproduce the problem on another system. If this fails, make an exact copy of the customers system.
Third step: If it still fails, you have no option but to try to debug it on the customer's system.
Once you can reproduce it, you can fix it. Doesn't matter on what system.
The tricky ones are the truly non-reproducible issues, that is, things that happen only intermittently. For those I'll have to chime in with the reports, logs and stern demands attitude. :)
It is important to categorize such bugs (rarely reproducible) and act on them differently than bugs that are frequently reproducible based on specific user actions.
Clear issue description along with steps to reproduce and observed behavior: Unambiguous reporting helps the entire team understand the issue and eliminates incorrect conclusions. For example, a user reporting a blank screen is different from an HMI freeze on user action. The sequence of steps and the approximate timing of user actions are also important. Did the user select the option immediately after the screen transition, or wait for a few minutes? An interesting timing-related bug is the car that was allergic to vanilla ice cream, which baffled automotive engineers.
System config and startup parameters: Sometimes even the hardware configuration and application software version (including driver and firmware versions) can play a trick or two. A mismatch in version or configuration can result in issues that are difficult to reproduce in other setups, so these are essential details to capture. Most bug reporting tools have these details as mandatory fields when logging an issue.
Extensive logging: This depends on the logging facilities used in the project concerned. While working with embedded Linux systems, we provide not only general diagnostic logs but also system-level logs such as dmesg or top output. You never know: the culprit may not be the code flow but abnormal memory or CPU usage. Identify the type of issue and report the relevant logs for investigation.
Code reviews and walk-throughs: Dev teams cannot wait forever to reproduce these issues at their end before taking action. The bug report and available logs should be investigated, and various possibilities identified from the design and code. If required, they should prepare a hotfix for the likely root causes and circulate it among the teams, including the tester who identified the issue, to see whether the bug is still reproducible with it.
Don't close these issues based on the observation of a single tester/team after a fix is identified and checked in: Perhaps the most important part is the approach followed to close these issues. Once a fix has been checked in, all testing/validation teams at different locations should be informed so they can run intensive tests and identify any regressions. Only when all of them (practically, most of them) report it as non-reproducible should a closure assessment be made by senior management.
If it is not reproducible, get logs and screenshots of the exact steps to reproduce.
There's a nice new feature in Windows 7 that allows the user to record what they're doing and then send a report - it comes through as a doc with screen-shots of every stage. Hopefully it'll help in the cases where it's the user interacting with the application in an order that the developer wouldn't think of. I've seen plenty of bugs where it's just a case that the developer's logical way of using the app doesn't fit with how end users actually do it... resulting in lots of subtle errors.
Logging is your friend!
Generally, what happens when we discover a bug that we can't reproduce is that we either ask the customer to turn on more logging (if it's available), or we release a version with extra logging added around the area we are interested in. Generally speaking the logging we have is excellent and can be very verbose, so releasing versions with extra logging doesn't happen often.
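As a small sketch of the "turn on more logging" idea (assuming java.util.logging and a made-up app.logLevel system property), verbosity can be made switchable at startup so the customer doesn't have to wait for a special build:

    // Run with e.g. -Dapp.logLevel=FINEST to raise verbosity without a rebuild.
    import java.util.logging.ConsoleHandler;
    import java.util.logging.Level;
    import java.util.logging.Logger;

    public class VerboseLogging {
        private static final Logger LOG = Logger.getLogger("app");

        public static void configure() {
            Level level = Level.parse(System.getProperty("app.logLevel", "INFO"));
            ConsoleHandler handler = new ConsoleHandler();
            handler.setLevel(level);
            LOG.setLevel(level);
            LOG.addHandler(handler);
            LOG.setUseParentHandlers(false); // avoid duplicate output via the root logger
        }

        public static void main(String[] args) {
            configure();
            LOG.info("normal operation message");
            LOG.finest("very verbose detail, only shown when app.logLevel=FINEST");
        }
    }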
You should also consider the use of memory dumps (which IMO also falls under the umbrella of logging). Producing a minidump is so quick that it can usually be done on production servers, even under load (as long as the number of dumps being produced is low).
The way I see it: being able to reproduce a problem is nice because it gives you an environment where you can debug, experiment and play around more freely, but reproducing a bug is by no means essential to debugging it! If the bug only happens on someone else's system, then you still need to diagnose and debug the problem in the same way; it's just that this time you need to be cleverer about how you do it.
The accepted answer is the best general approach. At a high level, it's worth weighing the importance of fixing the bug against what you could add as a feature or enhance that would benefit the user. Could a 'non-reproducible' bug take two days to fix? Could a feature be added in that time that gives users more benefit than that bug fix? Maybe the users would prefer the feature. I've been fixated at times as a developer on imperfections I can see, and then users are asked for feedback and none of them actually mention the bug(s) that I can see, but the software is missing a feature that they really want!
Sometimes, simple persistence in attempting to reproduce the bug whilst debugging can be the most effective approach. For this strategy to work, the bug needs to be 'intermittent' rather than completely 'non-reproducible'. If you can repeat a bug even one time in 10, and you have ideas about the most likely place it's occurring, you can place breakpoints at those points then doggedly attempt to repeat the bug and see exactly what's going on. I've experienced this to be more effective than logging in one or two cases (although logging would be my first go-to in general).

What should a good BugTracking tool be capable of? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 3 years ago.
I found a lot of questions asking for the best tool, but none asking which features you really need, and which features you never really needed.
(I caught myself comparing tools on feature matrices, something I hate, because in the end I will use only the 3-4 most important features and leave the rest untouched.)
It needs to:
collect bugs
order bugs on priority/severity/due date etc
assign bugs to developers
track a bug history
link similar bugs together
link bugs to customers
link solved bugs to releases
provide enough information or a reference to get the information to reproduce the bug
be usable by more than one developer
make the bug status accessible to the client that reported the bug
And there are more.
Simple end-user data entry. Without this you won't have bugs entered, which equals a worthless bug tool.
I can't answer this question for you, because I can't predict what is important for you, or what your situation is:
Are you on a large or small development team, or are you a one-man shop?
Would it be useful to have a system in place where you could have your application automatically send in trouble reports that create incidents in your bug tracking software?
Is being able to predict a release schedule important, or is this just something for a side project you're doing in your spare time?
Is integration with source control important?
In reality, you're the only one who can answer what features are required for you.
These are the 3 must-have features I find most important:
Web interface so people can follow up
Source control integration, otherwise it's really hard to track who did what and deploy patches
Configurable workflow with email notifications
Things I would really like to see:
1) Voting - i.e. how many customers/users does this bug hurt?
2) Severity/priority/whatever - the distinction between these terms is subtle and normally (IMHO) insignificant, but you have to have some idea of how important the bug is. Most tools have this, but overcomplicate it.
3) Dependencies - both internal (on other bugs in the same system) and external (external libraries, software, etc). Most bugs have this in reality, but it's not normally expressible in the database, leading to long, pointless debates at triage time.
Things I think are largely pointless:
1) Any extensive questionnaires - any bug-tracker that asks too many questions will just get bad data. That's worse than none.
2) Compulsory daily/weekly/whatever email notifications (controversial, I know). They just get filed as spam, ignored, or filtered out. If developers should be fixing bugs and aren't, that's a management problem; software cannot fix it.
Need:
Email notification.
Status
Group notify
Group rights
Web interface
Easy / fast interface

Has Microsoft paid support been worth it to you? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 9 years ago.
I saw this quote in this question:
MS support is poor, except when you've paid for a support contract... then its very very good. – gbjbaanb
This got me thinking. My company has had 2 support incidents with Microsoft a few months ago (before Stack Overflow was live). In both cases, we were pushing the limits of the SharePoint system we were building, and the API did not expose events to let us know when operations were completed. Both times, Microsoft's response was to tell us to add a Thread.Sleep() call and just wait for the operation to finish.
Now, I am all for the community working together to answer questions, but sometimes there just aren't answers to your question available online.
For the times when you can't find an answer, has paid support been worth it to you?
If you have had success with Microsoft paid support, please share what type of problem it was. I am trying to understand exactly what to expect from a support incident.
Yes it has. If you have made an effort to solve the problem yourself, you have to get to at least the third person before you get anybody who will understand the issue, and maybe the fourth person can help.
Once you get to that person, though -- I've found the support to be unbelievably good with a lot of followup and I even had an MS support person help me solve a complex ASP.NET deployment issue with one of our customers.
It's been mixed.
If you get lucky and talk to the right person they can provide invaluable insight.
On the other hand you might end up with another sympathetic pair of eyes. This can be helpful but not worth the money.
I too have had a good experience, but as with some of the others, I will say that you typically have to escalate the issue to get a resolution. Once you do, though, they follow through very well and provide detailed assistance.
I have had amazingly good luck calling MS for support. Usually, calling any large company for support is a nightmare. The exceptions, for me, have been MS and Oracle. In the case of MS, I have made 3 calls to them, 2 of which were IT related and one was a programming or Visual Studio related problem. Sorry I don't remember what the issues were. I recall having a little trouble understanding one of the reps due to accent differences between us, but he was very good and we got through it. This was about 3 years ago.
One other thing is I have found that the reps want to give you a break. If it is a bug, they won't charge you, and all the reps were very fair on this in my experience.
Yes, I have had an opportunity to use paid support from Microsoft. I was working for a dot-com and we were having intermittent crashes on our main site, which used ASP. I had been asked to run a utility to generate a crash dump when the error popped up on the site, and had sent a few in to be analyzed. The analysis suggested that the number of string concatenations being done in the code was a potential issue, as it was causing memory to fragment. The code had hundreds of concatenations, sometimes where they weren't even needed, e.g. a line would contain " blah blah blah " & " more more more" where the & is unnecessary and contributed to the issue. They also suggested that a string array might be better for building up a response string piece by piece for some pages.
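The code in question was classic ASP, but the same idea transposed to Java for illustration: repeated concatenation allocates a new string on every pass, while a single growable buffer (the Java analogue of the string-array suggestion) reuses one.

    // Repeated += copies the whole string each time; StringBuilder grows one buffer.
    public class ConcatDemo {
        public static void main(String[] args) {
            String[] rows = {"blah blah blah", "more more more", "even more"};

            // Wasteful: every += allocates and copies a brand-new String.
            String slow = "";
            for (String row : rows) {
                slow += row + "\n";
            }

            // Better: build the response in a single growable buffer.
            StringBuilder fast = new StringBuilder();
            for (String row : rows) {
                fast.append(row).append('\n');
            }

            System.out.println(slow.equals(fast.toString())); // true: same output, less garbage
        }
    }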
I've only had one experience with Microsoft support, and in that instance I found it to be lacking. After a few hours on the phone they concluded that there was no way to solve my problem (unable to modify any website settings in IIS) and we would have to reinstall everything. I later found a way to fix it myself without going through all of that by manually editing a file. This was about 8 years or so ago though, I haven't had to deal with them since then.
