When I launch a JupyterLab session, it does not log me off. I observed that the web application's inactive session timeout is not adequate: after testing for about one hour, I concluded that the inactive session timeout is either not configured or is longer than one hour. I want to set a session timeout so that the session is logged off when the user is away for a long period of time.
1- What is the name of that configuration file?
2- Where is the file located?
3- What parameters need to change to resolve this (no session timeout)?
Kindly provide an answer.
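For reference, a hedged sketch of the kind of settings involved, assuming a Jupyter Server deployment and its Python config file (typically `jupyter_server_config.py`, generated with `jupyter server --generate-config`; older notebook servers use `jupyter_notebook_config.py` and `c.NotebookApp` instead). The exact values here are illustrative:

```python
# jupyter_server_config.py -- illustrative values, adjust to your policy
c = get_config()  # provided by Jupyter when it loads this file

# Shut the server down after 1 hour with no activity (no kernels busy,
# no terminals, no connections).
c.ServerApp.shutdown_no_activity_timeout = 3600

# Cull kernels that have been idle for 30 minutes, checking every 5 minutes.
c.MappingKernelManager.cull_idle_timeout = 1800
c.MappingKernelManager.cull_interval = 300
```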
I have an Azure Function as a ServiceBusTrigger listening on a queue that has session enabled.
The producer may send messages with a very wide variety of session IDs (let's say 1,000 different values for the session ID).
By default, the Azure Function host will allow 8 concurrent sessions to be processed.
What this means is that out of the 1,000 session IDs, only 8 will be processed at any given time. So when the Azure Function host starts, the first 8 session IDs will have their messages processed. If one of these session IDs is idle for one minute (i.e. if one of these session IDs does not have a message for more than one minute), then its lock will be released and a new session (the ninth one) will have its messages processed.
So what this means is that if the first 8 session IDs each receive at least one message per minute, their lock will not be released and their consumer will not be allowed to process another session ID, thus leaving all remaining 992 session IDs to starve (i.e. they will never be given a chance to get their messages processed).
Obviously, I could update my host.json so that maxConcurrentSessions is set to 1,000. But I don't like this solution, because it means that my configuration is hardcoded to my system's current requirements, and these requirements may vary over time; i.e. I would have to find a way to monitor that session IDs are not starving, because 6 months from now, maybe I would need to increase maxConcurrentSessions to 2,000.
What I am looking for is a mechanism that would auto-adjust itself. For instance, it seems to me that the Azure Service Bus extension is missing a setting that would represent a maximum time-to-live for the lock. For instance, I should be allowed to specify something like:
```json
{
  "extensions": {
    "serviceBus": {
      "sessionIdleTimeout": "00:00:15",
      "sessionTimeToLive": "00:00:30"
    }
  }
}
```
With a configuration like this, what I would basically be saying is that if a session ID does not receive messages for 15 seconds, its lock should be released so that another session ID can be given a chance to process. Additionally, the TTL would kick in: if that same session ID is constantly receiving a new message every second, its lock would be forcibly released after 30 seconds despite that session ID having more messages needing to be processed; this way, another session ID is given a chance at processing.
Now given that there is nothing functionally equivalent to sessionTimeToLive in Azure Service Bus to my knowledge, would anyone have an idea on how I am supposed to handle this?
The entity lock duration combined with the "maxAutoLockRenewalDuration" setting already behaves like the proposed "sessionTimeToLive". By default, "maxAutoLockRenewalDuration" is set to 5 minutes, but you can set it to a lower value (or 0 if you don't want the lock to be renewed at all).
Essentially, the max processing time for a session would be Max(LockDuration, MaxAutoLockRenewalDuration).
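For example, assuming the Service Bus extension v5 setting names, the host.json fragment might look like the following (the entity's LockDuration itself is configured on the queue, not in host.json; values here are illustrative):

```json
{
  "extensions": {
    "serviceBus": {
      "sessionIdleTimeout": "00:00:15",
      "maxAutoLockRenewalDuration": "00:00:30"
    }
  }
}
```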
I want to know how to handle QLDB sessions in a Node.js application.
Should I create one session for the entire scope of the app or should I make a new session before each batch of transactions?
Right now I'm creating a session before each transaction and I'm getting some OCC conflicts when running unit tests (for each test a new session is created).
You should use as many sessions as needed to achieve the level of throughput required. Each session can run a single transaction, and each transaction has a certain latency. So, for example, if your transactions take 10ms, then you can do 100 transactions per second (1s = 1000ms and 1000/10 = 100). If you need to achieve 1000 TPS, you would then need 10 sessions.
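The arithmetic above can be sketched as a small helper (a hedged illustration, not part of the QLDB driver):

```javascript
// Estimate how many sessions are needed for a target throughput, given
// per-transaction latency. Each session runs one transaction at a time.
function sessionsNeeded(targetTps, txnLatencyMs) {
  const tpsPerSession = 1000 / txnLatencyMs; // e.g. 10ms latency -> 100 TPS per session
  return Math.ceil(targetTps / tpsPerSession);
}

// sessionsNeeded(1000, 10) -> 10, matching the example above
```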
The driver comes with a "pool" of sessions. So, each transaction should request a session from the pool. The pool will grow/shrink as required.
Each session can live no longer than ~15 minutes (there is some jitter). Thus, you should handle the case where using a session throws an exception (invalid session) and retry your operation (get a session, run the transaction).
In terms of OCC, I think that is quite likely unrelated to your usage of sessions. OCC means data you read in your transaction was changed by the time you tried to commit. Usually this means you haven't set up the right indexes, so your reads are scanning all items in a table.
I want to know what happens when the validity time expires. Some think the session then changes state to IDLE and is removed, but what kind of request do we send to the client?
The Tcc timer supervises an ongoing credit-control session in the credit-control server. It is RECOMMENDED to use the Validity-Time as input to set the Tcc timer value. In case of transient failures in the network, the Diameter credit-control server might change to the Idle state; to avoid this, the Tcc timer MAY be set so that Tcc equals 2 x Validity-Time. If the timer expires, Diameter needs to notify the connected devices that this session is no longer active and delete the information about it: it sends a Session-Termination-Request and deletes all data related to the session, including from the database.
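As a hedged illustration of the timer logic described above (not real Diameter stack code; the names and the session object shape are hypothetical):

```javascript
// Supervise a credit-control session: arm a Tcc timer at 2 x Validity-Time,
// and on expiry mark the session IDLE and run the termination callback
// (which would send the termination request and purge the session data).
function superviseSession(session, validityTimeMs, onExpire) {
  const tccMs = 2 * validityTimeMs; // Tcc = 2 x Validity-Time
  return setTimeout(() => {
    session.state = 'IDLE';
    onExpire(session); // e.g. send termination request, delete DB record
  }, tccMs);
}
```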
I have a web application on an IIS server.
I have a POST method that takes a long time to run (around 30-40 minutes).
After a period of time, the application stops running (without any exception).
I set Idle Time-out to 0 and it did not help.
What can I do to solve this?
Instead of doing all the work initiated by the request before responding at all:
1. Receive the request.
2. Put the information from the request in a queue (which you could manage with a database table, ZeroMQ, or whatever else you like).
3. Respond with a "Request received" message.
That way you respond within seconds, which is acceptable for HTTP.
Then have a separate process monitor the queue and process the data on it (doing the 30-40 minute long job). When the job is complete, notify the user.
You could do this through the browser with a Notification or through a WebSocket or use a completely different mechanism (such as by sending an email to the user who made the request).
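A minimal sketch of this queue-and-acknowledge pattern, assuming an in-memory array as the queue; a real deployment would use a database table or message broker as suggested above, with the worker in a separate process:

```javascript
// In-memory stand-in for a durable queue.
const queue = [];
let nextId = 1;

// Called by the HTTP handler: enqueue the work and respond immediately.
function acceptRequest(payload) {
  const job = { id: nextId++, payload, status: 'queued' };
  queue.push(job);
  return { id: job.id, message: 'Request received' }; // fast HTTP response
}

// Run by a separate worker: take the next job, do the long work, notify.
function processNext(doWork, notifyUser) {
  const job = queue.find((j) => j.status === 'queued');
  if (!job) return false;
  job.status = 'running';
  doWork(job.payload);       // the 30-40 minute job happens here
  job.status = 'done';
  notifyUser(job.id);        // e.g. WebSocket push or email to the requester
  return true;
}
```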
This may be related to platforms other than ColdFusion.
The IIS 6 log reports a "time-taken" much longer (30 minutes) than the 120 seconds set in Connection Timeout for several requests to a ColdFusion page.
I assume that ColdFusion was unresponsive at the moment. I would like IIS to stop the request rather than wait this long.
Is there an IIS setting that would force this?
Not really, because IIS is no longer handling the request once it has been passed to ColdFusion. You could try playing with the application pool timeout and see if you can get that to throw an error.
This scenario can also be considered as the slow HTTP DoS attack when caused by the client. IIS doesn't provide much protection against it (at least for slow POST body) because Microsoft considers it a protocol bug, not an IIS weakness. Although I think in this case it is your server doing it to itself.
Things to check:
- You didn't mention whether it is the request that is slow or the server's response. You could try tweaking your MinFileBytesPerSec parameter if it's the response that is slow. By default it will drop the connection if the client is downloading at less than 240 bytes per second.
- Remember, that 120-second IIS timeout is an idle timeout. As long as the client sends or receives a few bytes inside 120 seconds, that timer will keep getting reset.
- You didn't mention if this long wait is happening on all pages or always in a few specific ones. It is possible that your CF script is making another external connection, e.g. CFQUERY, which is not subject to CF timeouts but to the timeouts of the server it is connecting to. Using the timeout attribute inside CFQUERY may prevent this.
- You also didn't mention what your ColdFusion settings are. Maybe the IIS timeout setting is being ignored by the ColdFusion JRUN Connector ISAPI filter, so you should check the settings in ColdFusion Administrator. Especially check whether "Timeout Requests after" has been changed. If it's still at the default of 60 seconds, check your code to see if it has been overridden there, e.g. <cfsetting requestTimeOut = "3600">
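The CFQUERY timeout attribute mentioned above might look like this (query and datasource names are hypothetical):

```
<!--- Abort the database call if it takes longer than 30 seconds --->
<cfquery name="qOrders" datasource="myDsn" timeout="30">
    SELECT id FROM orders
</cfquery>
```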
Finally, there is the matter of the peculiar behavior of CF's requestTimeout, which you might have to work around by replacing some cfscript tags with CFML.