In the user guide section on session.timeout.auto.extend, Liferay specifies:
It is recommended to use this setting along with a smaller session.timeout,
such as 5 minutes, for better performance.
What exactly does 'better performance' mean?
I haven't changed my session timeout, and some of my users are losing their sessions. I assume changing this value might solve the issue, but I would really like to understand it better.
Thanks,
Alain
What exactly does 'better performance' mean?
Let's say you have 10 users who are currently logged in. If the session.timeout value is 5 minutes, then once these 10 users have been idle for 5 minutes their sessions are destroyed. It is good practice (and common) to close any open connections (such as DB connections), free the thread, and perform other resource cleanup at that point, so that other users who log in can use those resources.
But suppose the session.timeout value is 20 minutes. Then the resources allocated to these 10 users are freed only after they have been idle for 20 minutes (unless they log out themselves). This example uses only 10 users; in a real deployment there could be thousands or hundreds of thousands of users consuming your resources.
In summary, a lower session.timeout value gives better performance because allocated threads and other resources are freed sooner.
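To make this concrete, here is a minimal illustrative sketch (my own simplification, not Liferay's actual implementation) of a session registry that releases per-session resources once a session has been idle longer than the timeout; the smaller the timeout, the sooner the sweep reclaims them:

// Hypothetical session record; dbConnection stands in for any
// per-session resource that should be released on expiry.
type Session = { lastAccess: number; dbConnection: { close(): void } };

const TIMEOUT_MS = 5 * 60 * 1000; // session.timeout = 5 minutes
const sessions = new Map<string, Session>();

// Periodic sweep: destroy sessions that have been idle past the timeout.
setInterval(() => {
  const now = Date.now();
  for (const [id, session] of sessions) {
    if (now - session.lastAccess > TIMEOUT_MS) {
      session.dbConnection.close(); // free the resource for other users
      sessions.delete(id);
    }
  }
}, 60 * 1000); // check once a minute

With a 5-minute timeout, an abandoned session's connection is reclaimed up to 15 minutes sooner than with a 20-minute timeout.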
My extension (Manifest V3) needs to track the number of times a set of websites is visited, either during the whole day or during certain time windows, and then perform an action if the visit count exceeds a limit.
There are two ways I could think of implementing this:
1. alarm + history: Create an alarm that runs every 5 minutes, search the history for the required websites, and count the visits. If the count exceeds the limit, perform an action.
2. storage + history: Add a listener to chrome.history.onVisited. If the visited site is from the required list, increment the visit count in storage. If the storage count exceeds the limit, perform an action.
Which of the above approaches has the least impact on Chrome's browsing performance? Or is there another API I can use to achieve the same thing?
I would like my extension to consume the least amount of the user's battery :)
In 1 the extension will do a lot of unnecessary work when the user isn't using the browser.
In 2 the extension's background script will restart more often if the user navigates a lot but pauses between navigations for longer than the service worker's lifetime (30 seconds by default), which is a typical interaction pattern.
In both cases the bigger inherent problem of ManifestV3 for an extension such as yours that observes user activity is not what the extension does itself, but the huge overhead of restarting the background worker, which is automatically terminated 30 seconds after the last observed event (or 5 minutes if you use waitUntil). Such pauses in user activity are typical when browsing/interacting, so for many users the worker will restart hundreds of times a day. Starting the worker takes 50-100 ms and stresses the CPU, memory, and disk for that entire duration, while the time actually spent in a simple observation extension's code is just 1-2 ms.
In other words, an extension that observes user activity, such as yours, is inherently 25-100 times less efficient in ManifestV3 than it would be in ManifestV2 with a persistent background script.
Solutions.
Prolong the service worker's lifetime to reduce the number of restarts, as shown here. To avoid wasting memory for users who keep the browser open without using it for hours, you can dynamically adjust the lifetime by measuring and averaging the intervals between events, or offer an option to set the duration in your extension's UI. Hopefully the browser will eventually do this automatically, but it may take years before such a feature is implemented, and even then it will likely still restart the background script far too often.
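One commonly used keepalive technique (my sketch; the linked approach may differ in details) relies on the fact that calling almost any extension API resets the service worker's 30-second idle timer, so a sub-30-second interval keeps the worker alive for as long as the interval runs:

// background.ts (MV3 service worker); assumes @types/chrome is installed.
const KEEPALIVE_INTERVAL_MS = 25_000; // must stay under the 30 s idle limit

// A cheap API call whose only purpose is to reset the idle timer.
// Top-level code runs on every worker start, so the interval is
// re-registered automatically each time the worker wakes up.
setInterval(() => chrome.runtime.getPlatformInfo(() => {}), KEEPALIVE_INTERVAL_MS);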
Use chrome.webNavigation events with a URL filter for your sites so that the background script wakes up only when those specific URLs are visited. If the URLs are configured by the user, you will need to unregister the listener first (e.g. by making the listener a named global function) and then register it again with the new URL filter. You may still need to prolong the worker's lifetime if these URLs are visited a lot.
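A minimal sketch of such a filtered listener, assuming the webNavigation permission is declared in the manifest (the host names and the registerListener helper are hypothetical):

// background.ts -- wake the worker only for the configured sites.
function onVisit(details: { url: string; frameId: number }) {
  if (details.frameId !== 0) return; // ignore subframes
  // ...increment the stored visit count for details.url and act on the limit
}

function registerListener(hosts: string[]) {
  // A named function lets us remove the old registration before
  // re-registering with the user's updated site list.
  chrome.webNavigation.onCompleted.removeListener(onVisit);
  chrome.webNavigation.onCompleted.addListener(onVisit, {
    url: hosts.map((h) => ({ hostSuffix: h })),
  });
}

registerListener(["example.com", "example.org"]);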
My question is somewhat related to this and this, but they don't really answer my question.
My test case is pretty simple: I need to maintain a constant number of active concurrent users (e.g. 10 concurrent users) over a period of time (e.g. 60 seconds).
My code looks like this:
val TestProtocolBuilder: HttpProtocolBuilder = http.baseUrl("https://computer-database.gatling.io")

object Test {
  val test =
    exec(http("Test_Only")
      .get("/computers")
      .check(status.in(200, 404))
    )
}

val TestOnly = scenario("Test Only").exec(Test.test)

setUp(
  TestOnly.inject(
    constantConcurrentUsers(10) during(60 seconds)
  ).protocols(TestProtocolBuilder)
)
This documentation says:
constantConcurrentUsers(nbUsers) during(duration): Inject so that the number of concurrent users in the system is constant.
I am expecting to get 10 concurrent active users constantly hitting the API for 60 seconds. No more than 10 users and no less than 10 users at any time.
What I see in the HTML report is that the number of active users at any given time is much higher than 10 (almost double).
From the documentation:
This chart displays the active users during the simulation: total and per scenario.
“Active users” is neither “concurrent users” nor “users arrival rate”. It's a kind of mixed metric that serves both open and closed workload models and that represents “users who were active on the system under load at a given second”.
It’s computed as:
(number of alive users at previous second) + (number of users that were started during this second) - (number of users that were terminated during previous second)
Questions:
Why does Gatling keep terminating users and starting new users during the test period? What's the point?
How can I get a constant 10 concurrent active users (no more, no less) hitting my API for the duration of the test, when constantConcurrentUsers(10) during(60 seconds) gives me a much higher and fluctuating active-user count? I need to stick to my test case and not overload the API.
In the image above, at that given time the number of requests is 7 and the number of active users is 20. Does that mean that at that moment 7 active users are sending out requests and 20 - 7 = 13 active users are sitting idle, waiting for responses to come back from the API?
Thanks.
Why does Gatling keep terminating users and starting new users during the test period? What's the point?
Virtual users' lifespan is driven by your scenario. Injection profiles only control when users are injected/started.
If you don't want your users to terminate after just one request, add a loop in your scenario: in the Scala DSL, wrap the chain in during(60 seconds) { ... } or forever { ... }.
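As a rough sketch of the same idea using Gatling's JavaScript/TypeScript SDK (an assumption on my part: the @gatling.io/core and @gatling.io/http packages, whose API mirrors the Java DSL, so treat the exact names as approximate):

// Each virtual user loops for 60 seconds instead of terminating
// after a single request, so the pool stays at 10 users.
import { simulation, scenario, exec, constantConcurrentUsers } from "@gatling.io/core";
import { http, status } from "@gatling.io/http";

export default simulation((setUp) => {
  const protocol = http.baseUrl("https://computer-database.gatling.io");

  const testOnly = scenario("Test Only")
    .during(60) // keep each user iterating for 60 seconds
    .on(exec(http("Test_Only").get("/computers").check(status().in(200, 404))));

  setUp(
    // Closed model: a constant pool of 10 concurrent users.
    testOnly.injectClosed(constantConcurrentUsers(10).during(60))
  ).protocols(protocol);
});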
How can I get a constant 10 concurrent active users
Nonsensical. As you quoted yourself, concurrent != active. I guarantee you do have a constant number of concurrent users, meaning exactly 10 users alive at the same time. The thing is that, since your scenario has only a single request, each user terminates right after it and is replaced with a new one.
Does that mean that at that moment 7 active users are sending out requests and 20 - 7 = 13 active users are sitting idle, waiting for responses to come back from the API?
It means the virtual users' lifespans straddled a second boundary, so they were counted as alive in 2 different one-second buckets.
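Plugging rough numbers into the formula quoted above shows how this nearly doubles the reported value: with a pool of 10 users whose short lifespans straddle the second boundary, a given one-second bucket counts about 10 users carried over from the previous second plus about 10 replacement users started within the bucket, i.e. roughly 10 + 10 = 20 active users, even though only 10 are ever alive at the same time.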
I wanted to process records from a database concurrently and in the minimum time, so I thought of using a Parallel.ForEach() loop to process the records, with MaxDegreeOfParallelism set to the processor count.
ParallelOptions po = new ParallelOptions
{
    MaxDegreeOfParallelism = Environment.ProcessorCount
};

Parallel.ForEach(listUsers, po, (user) =>
{
    // Parallel processing
    ProcessEachUser(user);
});
But to my surprise, the CPU utilization was not even close to 20%. When I dug into the issue and read the MSDN article on this (http://msdn.microsoft.com/en-us/library/system.threading.tasks.paralleloptions.maxdegreeofparallelism(v=vs.110).aspx), I tried the special MaxDegreeOfParallelism value of -1. As the article says, this value removes the limit on the number of concurrently running operations, and the performance of my program improved considerably.
But that still didn't meet my requirement for the maximum time allowed to process all the records in the database. So I analyzed further and found that the thread pool has two settings, MinThreads and MaxThreads. By default the values of MinThreads and MaxThreads are 10 and 1000 respectively: only 10 threads are created at the start, and the number keeps increasing up to a maximum of 1000 with every new user, unless a previous thread has finished its execution.
So I raised the initial MinThreads value from 10 to 900 using
System.Threading.ThreadPool.SetMinThreads(900, 900);
so that at least 900 threads would be available right from the start, thinking this would improve performance significantly. It did create 900 threads, but it also greatly increased the number of failures while processing each user, so I did not gain much from this approach. I then changed MinThreads to just 100 and found that performance was much better.
But I wanted to improve things further, as my time constraint was still not met: processing all the records still exceeded the time limit. You may think, as I did, that I was already doing everything possible to maximize parallel-processing performance.
But to meet the time limit I decided to take a shot in the dark. I created two separate executables (slaves) in place of one and assigned each of them half of the users from the DB. Both executables did the same thing and ran concurrently, and I created another master program to start the two slaves at the same time.
To my surprise, it reduced the time taken to process all the records by nearly half.
Now my question is simply that I do not understand why the master/slave arrangement performs better than a single EXE, when the logic is exactly the same in both slaves as in the original EXE. I would highly appreciate it if someone could explain this in detail.
But to my surprise, the CPU utilization was not even close to 20%.
…
It uses the Http Requests to some Web API's hosted in other networks.
This means that CPU utilization is entirely the wrong thing to look at. When using the network, it's your network connection that's going to be the limiting factor, or possibly some network-related limit, certainly not CPU.
Now I created two separate executables … To my surprise, it reduced the time taken to process all the records by nearly half.
This points to an artificial, per-process limit, most likely ServicePointManager.DefaultConnectionLimit (which defaults to just 2 connections per endpoint for non-ASP.NET clients). Try setting it to a larger value than the default at the start of your program and see if it helps. Splitting the work across two processes doubled the effective limit, which is why the master/slave setup finished in nearly half the time.
Running a single script with only two users as a single scenario, without any pacing and with just think time set to 3 seconds and random (50%-150%), I find that the web app server runs out of memory after 10 minutes every time (I have run the test several times, and it happens at the same point every time).
At first I thought this was a memory leak in the application, but after some thought I figured it might have to do with the scenario design.
The entire script, which has just one action block containing log in and log out, takes about 50 seconds to run. I have the default As soon as the previous iteration ends pacing setting, not With a delay after the previous iteration ends or a fixed/random interval.
Could not using fixed/random intervals cause this "memory leak" to happen? I guess none of the settings mentioned would actually start a new iteration before the previous one ends, which would obviously lead to an accumulation of memory on the server and result in this "memory leak". But with no pacing set, is there a risk of this happening?
And since my script has no explicit iterations, could I still be using pacing?
To answer your last question: NO.
Pacing is explicitly used when a new iteration starts. The iteration start is delayed according to pacing settings.
Speculation/Conclusions:
If the web server really runs out of memory after 10 minutes with only 2 vusers, you have a problem on the web-server side. One could generate this 2-vuser load manually and crash the web server; the pacing in the scripts, or manual user speeds, are irrelevant. If the web server can be crashed remotely, it has bugs that need fixing.
Suggestion:
Try running the scenario with 4 users: do you get OUT OF MEMORY on the web server after about 5 minutes? If the leak scales with load, doubling the users should roughly halve the time to failure.
If there really is a leak, your script/scenario shouldn't be the cause, but depending on how you run it you could make the problem show up sooner.
For example, let's say with 5 users and reasonable pacing and think times, the server doesn't die for 16 hours. But with 50 users it dies in 2 hours. You haven't caused the problem, just exposed it sooner.
I suspect it's a web server problem. Pacing is nothing but a time gap between iterations; it doesn't affect the actions or transactions in your script.
Is there a best practice or industry standard for the session timeout length of e-commerce websites that contain personally identifiable information?
I would think that during QA testing you would keep track of the average length of time it takes a user to complete a task on the site. With this average in mind, you would compute the standard deviation and use it to choose a sensible timeout.
An example:
It takes a user an average of 5 minutes to perform a task. Let's say your SD is 2 minutes, so you'd have 5 - 2SD on the low end, which is one minute, and 5 + 2SD on the high end, which is 9 minutes. Take the high end, display a warning that the user is about to be logged off, then wait one minute and log them off automatically.
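As a tiny sketch of that calculation (the helper name and the two-sigma default are my own choices):

// Hypothetical helper: derive a logoff-warning threshold from QA timing data.
function timeoutMinutes(meanMinutes: number, sdMinutes: number, k = 2): number {
  return meanMinutes + k * sdMinutes; // mean + k standard deviations
}

console.log(timeoutMinutes(5, 2)); // 9 -- warn at 9 minutes, log off at 10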
I think it's too dependent on the scope. A banking site is going to have a shorter timeout than a forum. 5 minutes is probably a good standard for important things, 20 minutes for less important things.