I need to perform load testing with 1000 virtual users, but I don't have that many user credentials, so I want to simulate a new user on each iteration. Can anyone explain how? I have already enabled "Simulate a new user on each iteration" and also enabled "Clear cache on each iteration", but I am still getting the same session ID across multiple iterations.
We have SSO integrated with our application, and I created a simple sign-in and sign-out scenario under Action.c with 4 iterations.
Below are the logs I get after executing the script. The session ID stays the same for every iteration:
Iteration 1:
Action.c(110): ************** SESSION ID ************** : 1e9e644f-7023-4641-b53d-4a8db900a8c9
Iteration 2:
Action.c(110): ************** SESSION ID ************** : 1e9e644f-7023-4641-b53d-4a8db900a8c9
Iteration 3:
Action.c(110): ************** SESSION ID ************** : 1e9e644f-7023-4641-b53d-4a8db900a8c9
Iteration 4:
Action.c(110): ************** SESSION ID ************** : 1e9e644f-7023-4641-b53d-4a8db900a8c9
My run-time settings look like this:
Is it possible that your app server runs behind a load balancer? We sometimes have trouble with sticky sessions during load testing: because all requests come from the same IP, the session is cached on the proxy/load balancer.
Or maybe you found a bug in your app...
Looking at your script here https://gist.github.com/tejas1493/540ab8e39a1ab21d560a3872667be315 you are logging the client_id parameter that you correlated when landing on the login page.
By the looks of your login request, it uses Spring and OpenID. With OpenID, the client_id is a unique identifier for the client application, so it will always be the same; it is not related to the individual session.
https://connect2id.com/learn/openid-connect
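To confirm whether each iteration really gets a fresh session, log the server-issued session cookie instead of the client_id. A minimal sketch in LoadRunner C, assuming the cookie is named JSESSIONID and using a placeholder URL; substitute whatever your server actually sets:

// Capture the session cookie from the login response headers
web_reg_save_param_ex(
    "ParamName=RealSessionId",
    "LB=JSESSIONID=",
    "RB=;",
    SEARCH_FILTERS,
    "Scope=Headers",
    LAST);

web_url("login", "URL=https://your-app.example.com/login", LAST);

// This value should change on every iteration if a new session is issued
lr_output_message("REAL SESSION ID: %s", lr_eval_string("{RealSessionId}"));

If this value also stays constant, the sticky-session/load-balancer theory above becomes much more likely.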
Context
I am trying to reproduce the case where a session gets closed due to inactivity.
There is a 'manual' way of doing it, which is to run SYSTEM$ABORT_SESSION(<session_id>), making your session no longer active.
That's OK, but I'm trying to understand how a real-life scenario would work. The default idle timeout is 4 hours, so I tried using a session policy with a smaller timeout, making it more practical to write some unit tests.
Steps
Created a session policy with a 5 minutes idle timeout
CREATE OR REPLACE SESSION POLICY my_session_policy
  session_idle_timeout_mins = 5
  comment = 'temporary session policy for testing purposes';
Attached it to a NEW user
CREATE USER my_newly_created_user;
ALTER USER my_newly_created_user SET disabled=true;
-- here
ALTER USER my_newly_created_user SET SESSION POLICY my_session_policy;
ALTER USER my_newly_created_user SET disabled=false;
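(As a sanity check, you can confirm the policy really is attached to the user with the POLICY_REFERENCES Information Schema table function; my_db is a placeholder for your database:)

SELECT policy_name, ref_entity_name
  FROM TABLE(my_db.information_schema.policy_references(
    ref_entity_name => 'my_newly_created_user',
    ref_entity_domain => 'user'));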
Created a connection for my new user
const connection = snowflake.createConnection({
account: MY_ACCOUNT,
username: 'my_newly_created_user',
password: NEW_USER_PASSWORD,
database: MY_DB,
application: someRandomName(),
clientSessionKeepAlive: false // it's supposed to default to false anyway
})
const client = await connection.connect()
Ran a query with client, just to test that the session was successfully initiated.
Then I waited more than 5 minutes (exceeding the idle timeout) with no session activity, expecting the session to be closed automatically, but...
the session still works, and it is actually listed as "open" in the Snowflake Web UI.
I was expecting my Snowflake client to throw a 'terminated session' error.
Platform
All of the above queries and code were run on Node.js using the official Snowflake connector at its latest version (1.6.10).
Some parts of the Snowflake docs contradict what happens in my case:
A session policy defines the idle session timeout period in minutes
and provides the option to override the default idle timeout value of
4 hours.
The timeout period begins upon a successful authentication to
Snowflake. If a session policy is not set, Snowflake uses a default
value of 240 minutes (i.e. 4 hours). The minimum configurable idle
timeout value for a session policy is 5 minutes.
Conclusion
As I understand it, my session policy should override the default idle timeout, making sessions of my_newly_created_user unusable after 5 minutes.
Am I missing some step to make my_newly_created_user use my_session_policy? The docs are not very explicit about this, and I don't know how to debug it.
Can you check if the enforce_session_policy parameter is set to true?
show parameters like 'enforce_session_policy' in account;
The default value is false, so you need to set it to true like so:
alter account set enforce_session_policy = true;
Once you have changed the parameter to true, try to connect and wait 5 minutes. After 5 minutes, try to execute a query.
You will get an error message like:
Session no longer exists.
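On the client side, the failure surfaces on the next statement you run. A minimal sketch with the Node.js connector, assuming the error arrives through the execute callback (the exact message text may differ):

// Run a probe query after the idle timeout has elapsed
connection.execute({
  sqlText: 'SELECT CURRENT_TIMESTAMP()',
  complete: (err, stmt, rows) => {
    if (err) {
      // Expected once the policy kicks in, e.g. "Session no longer exists."
      console.error('Session terminated by idle-timeout policy:', err.message);
    } else {
      console.log('Session still alive:', rows);
    }
  },
});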
Team,
I have a website published on Azure. The application reads around 30,000 employees from an API and, after the read succeeds, updates the secondary Redis cache with all 30,000 employees.
The timeout occurs in the second step, when it updates the secondary Redis cache with all the employees. It works fine locally, but as soon as I deploy it to Azure, I get:
500 - The request timed out.
The web server failed to respond within the specified time
From the blogs I learned that the default timeout for an Azure website is 4 minutes.
I have tried all the fixes suggested on the blogs, like setting SCM_COMMAND_IDLE_TIMEOUT to 3600 in the application settings.
I even tried putting the Azure Redis cache session-state provider settings in web.config with inflated timeout values:
<add type="Microsoft.Web.Redis.RedisSessionStateProvider" name="MySessionStateStore" host="[name].redis.cache.windows.net" port="6380" accessKey="QtFFY5pm9bhaMNd26eyfdyiB+StmFn8=" ssl="true" abortConnect="False" throwOnError="true" retryTimeoutInMilliseconds="500000" databaseId="0" applicationName="samname" connectionTimeoutInMilliseconds="500000" operationTimeoutInMilliseconds="100000" />
The offending code responsible for the timeout is this:
public void Update(ReadOnlyCollection<ColleagueReferenceDataEntity> entities)
{
    // Update the secondary cache with colleague data, one entity per call.
    var secondaryCache = this.Provider.GetSecondaryCache();
    foreach (var entity in entities)
    {
        try
        {
            secondaryCache.Put(entity.Id, entity);
        }
        catch (Exception ex)
        {
            // If a record fails, log it and continue with the rest.
            this.Logger.Error(ex, string.Format(
                "Error updating a colleague in secondary cache: Id {0}", entity.Id));
        }
    }
}
Is there anything I can change in this code?
Can anyone help? I have run out of ideas!
You're doing it wrong! Redis is not the problem. The main request thread itself is getting terminated before the process completes. You shouldn't make a request wait that long: there is a hard-coded limit of 230 seconds on in-flight requests, and it can't be changed.
Read here: Why does my request time out after 230 seconds?
Assumption #1: You're loading the data on the very first request from the client side.
Solution: If the 30,000 employee records are for the whole application, and not per specific user, you can trigger the data load on app start-up instead of on a user request.
Assumption #2: You have individual users, and for each of them you have to store the 30,000 employee records on their first request from the client side.
Solution: Add a background job (maybe a WebJob or an Azure Function) to process the task. Upon a request from the client, return a 202 (Accepted) with the job-status location in a header; the client can then poll the status of the task at a certain frequency and update the user accordingly, as in the sketch below.
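A minimal sketch of that 202 pattern in ASP.NET Web API (the route name "JobStatus", the RefreshCacheAsync helper, and the job store are assumptions, not your actual code):

[HttpPost]
public IHttpActionResult RefreshEmployees()
{
    var jobId = Guid.NewGuid().ToString();

    // Hand the long-running cache refresh to a background worker so the
    // request returns well inside the 230-second limit.
    HostingEnvironment.QueueBackgroundWorkItem(ct => RefreshCacheAsync(jobId, ct));

    // 202 Accepted + a Location header the client can poll for job status.
    var response = Request.CreateResponse(HttpStatusCode.Accepted);
    response.Headers.Location = new Uri(Url.Link("JobStatus", new { id = jobId }));
    return ResponseMessage(response);
}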
Edit 1:
For Assumption #1, you can also try batching the objects when pushing them to Redis. Currently you update one object at a time, which means 30,000 round trips; that will definitely exhaust the 230-second limit. As a quick fix, batch multiple objects into one request to Redis. I hope it does the trick!
UPDATE:
As you're using StackExchange.Redis, use the following pattern to batch the objects, as already mentioned here:
Batch set data from Dictionary into Redis
The number of objects per request varies depending on the payload size and the bandwidth available. As your site is hosted on Azure, I do not think bandwidth will be much of a concern.
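A minimal sketch of that batching with StackExchange.Redis (the key prefix and JSON serialization are assumptions; adapt to however your cache provider stores entities):

var db = connectionMultiplexer.GetDatabase();
var batch = db.CreateBatch();
var pending = new List<Task>();

foreach (var entity in entities)
{
    // Queued, not sent: the batch pipelines everything into one burst.
    pending.Add(batch.StringSetAsync(
        "colleague:" + entity.Id,
        JsonConvert.SerializeObject(entity)));
}

batch.Execute();                 // flush the whole pipeline at once
Task.WaitAll(pending.ToArray()); // observe any per-key failures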
Hope that helps!
I have the following scenario using JMeter 3.3:
I would like to run "Get auth token" once every 2.5 minutes and meanwhile run the [GET] thread group non-stop.
In other words, [GET] takes the auth token from the first thread group; I would like to run them in parallel and only change the token once every 2.5 minutes.
I tried adding a Constant Timer to the first thread group, but the second thread group does not run until the timer has passed.
How can I keep [GET] running non-stop and "Get auth token" running only once every 2.5 minutes?
Later edit:
The [GET] thread group is used for load tests and should run with ~100 active users (all using the same token).
The Constant Timer was added under the HTTP Sampler:
> Get Auth token
>> [POST] Auth token
>>> HTTP Header Manager
>>> Regular Expression Extractor
>>> Response Assertion
>>> Constant Timer
Later edit 2:
I tried adding a Test Action under the first thread group, but I did not manage to make thread group 2 run without waiting for thread group 1's delay.
As per the Functions and Variables User Manual chapter:
Properties are not the same as variables. Variables are local to a thread; properties are common to all threads, and need to be referenced using the __P or __property function.
So I would suggest converting your authToken into a JMeter property via the __setProperty() function in the "Get auth token" thread group and referring to the value in the [GET] thread group using the __P() function, so that once the authToken value is updated, all the threads will use the new value instead of the old one.
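A minimal sketch, assuming your Regular Expression Extractor stores the token in a JMeter variable named authToken (the Bearer scheme in the header is also an assumption; use whatever your API expects):

In the "Get auth token" thread group, after the extractor (e.g. in the sampler itself or a JSR223 element):
${__setProperty(authToken, ${authToken},)}

In the [GET] thread group's HTTP Header Manager:
Authorization: Bearer ${__P(authToken,)}

Because properties are shared across thread groups, every [GET] thread picks up the new token the next time the header is evaluated.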
More information: Knit One Pearl Two: How to Use Variables in Different Thread Groups
I am planning to use the OpenAM SSOTokenListener to capture the different logout events:
SSO_TOKEN_DESTROY
SSO_TOKEN_IDLE_TIMEOUT
SSO_TOKEN_MAX_TIMEOUT
SSO_TOKEN_PROPERTY_CHANGED
but in the ssoTokenChanged method the SSOTokenEvent type is always SSO_TOKEN_DESTROY.
How do I identify the SSO_TOKEN_IDLE_TIMEOUT event? For reference, my listener is shaped roughly like the sketch below.
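A minimal sketch of the dispatch being described (the class name is made up; the constants are the ones listed above):

import com.iplanet.sso.SSOException;
import com.iplanet.sso.SSOTokenEvent;
import com.iplanet.sso.SSOTokenListener;

public class LogoutEventListener implements SSOTokenListener {
    public void ssoTokenChanged(SSOTokenEvent event) {
        try {
            switch (event.getType()) {
                case SSOTokenEvent.SSO_TOKEN_DESTROY:
                    // explicit logout or admin-terminated session
                    break;
                case SSOTokenEvent.SSO_TOKEN_IDLE_TIMEOUT:
                    // session expired through inactivity -- never observed here
                    break;
                case SSOTokenEvent.SSO_TOKEN_MAX_TIMEOUT:
                    // session reached its maximum lifetime
                    break;
                case SSOTokenEvent.SSO_TOKEN_PROPERTY_CHANGED:
                    // a session property was updated
                    break;
            }
        } catch (SSOException e) {
            // getType() can fail if the token is no longer valid
        }
    }
}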
Summary:
Domino server 8.5.3 FP2 on Windows Server 2008.
When calling
NotesAgent.runOnServer(noteid)
from a web browser, in a thread where the agent is set to "Run as web user", I get the error "HTTP JVM: You are not authorized to perform that operation".
Detail:
All web requests come into our application through two channels: via a Notes agent or via an XPage (acting like an XAgent). We have a back-end process that can take up to 20 seconds to complete; it is a call to a remote web service and we have no control over it. Due to requirements, we cannot queue these documents for a scheduled agent; they need to go immediately... or as immediately as the service will allow! The two main problems are: 1) the user has to wait up to 20 seconds; 2) the HTTP thread is not freed up. During busy times, we have seen the HTTP thread pool saturate. What I have done in my test environment is send the request to the XAgent, which calls our backing bean; the bean starts a separate thread and returns a message to the user. It's working great: the HTTP thread frees up immediately, the user gets a timely response, and the submission to the web service proceeds "asynchronously".
The logic calling the web service is in LotusScript; converting it to Java would be a massive job, as there is an enormous number of interconnected processes in LotusScript. In the Java thread the username is the server name and effectiveUserName is the authenticated HTTP user. The thread calls
NotesAgent.runOnServer(noteid)
, which works, except the agent runs with the credentials of the user that signed the agent. If we set the agent to "Run as web user", I get the error above. As a test, I moved the code that triggers NotesAgent.run() into the main "calling" function, which gets its session via:
JSFUtil.getVariableValue("session")
and this works as expected (user = server, HTTP user = effective user). The thread session is obtained like this:
// On the XPages request thread: capture the module and a session cloner
this.module = NotesContext.getCurrent().getModule();
this.sessionCloner = SessionCloner.getSessionCloner();
// Inside the worker thread: bind a NotesContext, then clone the session
NotesContext context = new NotesContext( this.module );
NotesContext.initThread( context );
session = this.sessionCloner.getSession();
...and as above, the effectiveUserName is the authenticated HTTP user and the username is the server name.
If I browse directly to the agent, e.g. .../myapp.nsf/myagent?openagent, the agent runs as the effective HTTP user. I then put my test HTTP user into the highest-security group I have on my test server: same error. I then logged in as a server admin user (which has security rights for everything) and got the same error.
On my test server, Domino\jvm\lib\security\java.policy contains the following when running the job from the NSF:
grant {
permission java.security.AllPermission;
};
Since I can trigger the agent via JSFUtil.getVariableValue("session"), is there some security difference when getting a session via SessionCloner.getSessionCloner().getSession()?
Thanks in advance.
Agents and XPages shall not interbreed :-). For the thread, I would remove the need to get the web user. Pass a Java object to the thread that does not contain any Notes objects. Then go old school and use NotesThread.sinitThread() / NotesThread.stermThread() to get a shiny new session and run the agent from there, as in the sketch below.
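A minimal sketch of that approach (the class, database, and agent names are made up; the worker receives only the note ID as a plain string):

public class AgentRunner implements Runnable {
    private final String noteId;

    public AgentRunner(String noteId) {
        this.noteId = noteId;
    }

    public void run() {
        lotus.domino.Session session = null;
        try {
            // Attach this thread to the Notes runtime and build a fresh
            // server session instead of cloning the XPages one.
            lotus.domino.NotesThread.sinitThread();
            session = lotus.domino.NotesFactory.createSession();
            lotus.domino.Database db = session.getDatabase("", "myapp.nsf");
            lotus.domino.Agent agent = db.getAgent("myagent");
            agent.run(noteId); // hand the document to the agent via its note ID
        } catch (lotus.domino.NotesException e) {
            e.printStackTrace();
        } finally {
            try {
                if (session != null) session.recycle();
            } catch (lotus.domino.NotesException e) {
                // ignore on shutdown
            }
            lotus.domino.NotesThread.stermThread(); // detach from the Notes runtime
        }
    }
}

The trade-off: the session belongs to the server (or the ID the JVM runs under), so "Run as web user" no longer applies; if you need the web user's identity inside the agent, pass the username along in the parameter document instead.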