Redis Session State EVAL Timeout - Azure

I am trying to use Redis Session State with my Windows Azure cloud website. I am using the 1 GB Standard tier cache and the P1 Premium database, and I am testing on localhost. My cache and website are both located in East US.
I am storing 200-400 objects in session state, including an order and its payments.
Here is the error:
An exception of type 'System.TimeoutException' occurred in Microsoft.Web.RedisSessionStateProvider.dll but was not handled in user code
Additional information: Timeout performing EVAL, inst: 0, mgr: Inactive, err: never, queue: 7, qu: 1, qs: 6, qc: 0, wr: 1, wq: 1, in: 0, ar: 0, IOCP: (Busy=0,Free=1000,Min=8,Max=1000), WORKER: (Busy=1,Free=4094,Min=8,Max=4095), clientName: XX
Here are my settings:
<sessionState mode="Custom" customProvider="MySessionStateStore">
<providers>
<add name="MySessionStateStore" type="Microsoft.Web.Redis.RedisSessionStateProvider" host="XX.redis.cache.windows.net" accessKey="XX" ssl="true" syncTimeout="3000" connectionTimeoutInMilliseconds="5000" operationTimeoutInMilliseconds="1000" retryTimeoutInMilliseconds="3000" />
</providers>
</sessionState>

Late answer perhaps, but nevertheless...
In this case, it looks like the amount of data in the cache is your problem. Redis is fine-tuned to retrieve lots of small cached items, not a single huge string.
Another thing I'd suggest you try is increasing the minimum number of IOCP and worker threads. In my scenario (a 2-core machine) I found that the best number is 100; a sketch follows below.
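A minimal sketch of raising those minimums in code, assuming classic ASP.NET (the same values can also be set via the processModel element in machine.config); 100 is just the value that worked on my 2-core machine, not a universal constant:

using System.Threading;

public static class ThreadPoolTuning
{
    // Call once at application start-up, e.g. from Application_Start in
    // Global.asax. Raising the minimums stops the thread pool from
    // throttling thread creation while Redis operations queue up behind it.
    public static void Apply(int minThreads = 100)
    {
        ThreadPool.SetMinThreads(
            workerThreads: minThreads,
            completionPortThreads: minThreads);
    }
}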

Related

Azure Function StackExchange.Redis.RedisTimeoutException on StringGetAsync

I am running an Azure Function app in the Azure cloud, and from time to time I get the following error:
Exception while executing function: ****Timeout performing SETEX (10000ms), next: GET *****, inst: 140, qu: 0, qs: 0, aw: False, bw: SpinningDown, rs: ReadAsync, ws: Idle, in: 2939, serverEndpoint: *****:6380, mc: 1/1/0, mgr: 10 of 10 available, clientName: 4ad57eb720e9(SE.Redis-v2.6.66.47313), IOCP: (Busy=0,Free=1000,Min=6,Max=1000), WORKER: (Busy=69,Free=32698,Min=6,Max=32767), POOL: (Threads=69,QueuedItems=54,CompletedItems=8674751), v: 2.6.66.47313 (Please take a look at this article for some common client-side issues that can cause timeouts: https://stackexchange.github.io/StackExchange.Redis/Timeouts)
It doesn't look like there is a specific reason for this to happen.
Any ideas why this happens?
The bottleneck could be either network bandwidth or CPU cycles. Based on the log you shared, you have 69 worker threads, so I would first check the CPU usage to be sure you aren't maxing it out.
You are likely not hitting network bandwidth issues, assuming you are running on Azure and given that the qs/qu values are 0; there could also be network glitches causing the timeouts, but those should be transient and resolve on their own. A diagnostic sketch for watching the thread pool follows below.
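If you want to line up the WORKER/POOL numbers from the timeout message with your own process over time, here is a minimal logging sketch; ThreadPool.ThreadCount and PendingWorkItemCount assume .NET Core 3.0 or later:

using System;
using System.Threading;

public static class ThreadPoolMonitor
{
    // Logs thread-pool state every 30 seconds so spikes can be matched
    // against the RedisTimeoutException timestamps. Keep a reference to
    // the returned Timer so it isn't garbage-collected.
    public static Timer Start() => new Timer(_ =>
    {
        ThreadPool.GetAvailableThreads(out int worker, out int iocp);
        Console.WriteLine(
            $"threads={ThreadPool.ThreadCount} " +
            $"queued={ThreadPool.PendingWorkItemCount} " +
            $"availableWorker={worker} availableIocp={iocp}");
    }, null, TimeSpan.Zero, TimeSpan.FromSeconds(30));
}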

Azure website timing out after long process

Team,
I have a website published on Azure. The application reads around 30,000 employees from an API and, after the read succeeds, updates the secondary Redis cache with all 30,000 of them.
The timeout occurs in the second step, when it updates the secondary Redis cache with all the employees. Locally it works fine, but as soon as I deploy to Azure, it gives me a
500 - The request timed out.
The web server failed to respond within the specified time
From the blogs I learned that the default timeout for an Azure website is 4 minutes.
I have tried all the fixes suggested on the blogs, like setting SCM_COMMAND_IDLE_TIMEOUT to 3600 in the application settings.
I even tried putting the Azure Redis Cache session state provider settings in the web.config with inflated timeout figures:
<add type="Microsoft.Web.Redis.RedisSessionStateProvider" name="MySessionStateStore" host="[name].redis.cache.windows.net" port="6380" accessKey="QtFFY5pm9bhaMNd26eyfdyiB+StmFn8=" ssl="true" abortConnect="False" throwOnError="true" retryTimeoutInMilliseconds="500000" databaseId="0" applicationName="samname" connectionTimeoutInMilliseconds="500000" operationTimeoutInMilliseconds="100000" />
The offending code responsible for the timeout is this:
public void Update(ReadOnlyCollection<ColleagueReferenceDataEntity> entities)
{
    //Trace.WriteLine("Updating the secondary cache with colleague data");
    var secondaryCache = this.Provider.GetSecondaryCache();
    foreach (var entity in entities)
    {
        try
        {
            secondaryCache.Put(entity.Id, entity);
        }
        catch (Exception ex)
        {
            // If a record fails, log and continue.
            this.Logger.Error(ex, string.Format(
                "Error updating a colleague in secondary cache: Id {0}", entity.Id));
        }
    }
}
Is there anything I can change in this code?
Please, can anyone help me? I have run out of ideas!
You're doing it wrong! Redis is not the problem. The main request thread itself is getting terminated before the process completes. You shouldn't let a request wait that long: there is a hard-coded limit of 230 seconds on in-flight requests which can't be changed.
Read here: Why does my request time out after 230 seconds?
Assumption #1: You're loading the data on the very first request from the client side.
Solution: If the 30,000 employee records are for the whole application, and not per specific user, you can trigger the data load on app start-up instead of on a user request; a sketch follows.
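A minimal sketch of triggering the load from Global.asax at start-up; EmployeeCacheLoader is a hypothetical stand-in for your own loading code, not an existing API:

using System.Web.Hosting;

public class MvcApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        // Queue the load as background work so start-up isn't blocked;
        // HostingEnvironment keeps ASP.NET aware of the work during
        // recycles. LoadAllAsync(ct) is assumed to return a Task.
        HostingEnvironment.QueueBackgroundWorkItem(
            ct => EmployeeCacheLoader.LoadAllAsync(ct));
    }
}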
Assumption #2: You have individual users, and for each of them you have to store the 30,000 employee records on the first request from the client side.
Solution: Add a background job (maybe a WebJob or an Azure Function) to process the task. Upon request from the client, return a 202 (Accepted) with the job-status location in the header. The client can then poll the status endpoint at a certain frequency and update the user accordingly; a sketch follows.
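A minimal sketch of that 202 pattern with ASP.NET Web API; IJobQueue and the "JobStatus" route name are hypothetical placeholders for whatever queue and status endpoint you set up:

using System;
using System.Net;
using System.Net.Http;
using System.Web.Http;

public class CacheRefreshController : ApiController
{
    private readonly IJobQueue jobQueue; // hypothetical, e.g. backed by an Azure Storage queue

    public CacheRefreshController(IJobQueue jobQueue)
    {
        this.jobQueue = jobQueue;
    }

    [HttpPost]
    public HttpResponseMessage Start()
    {
        // Hand the long-running work to a WebJob/Function via the queue.
        string jobId = this.jobQueue.Enqueue("refresh-employee-cache");

        // 202 Accepted plus a Location header the client can poll.
        var response = Request.CreateResponse(HttpStatusCode.Accepted);
        response.Headers.Location = new Uri(Url.Link("JobStatus", new { id = jobId }));
        return response;
    }
}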
Edit 1:
For Assumption #1 - you can also try batching the objects while pushing them to Redis. Currently you're updating one object at a time, which means 30,000 requests; that will definitely exhaust the 230-second limit. As a quick solution, batch multiple objects into one request to Redis. I hope it does the trick!
UPDATE:
As you're using StackExchange.Redis, use the batching pattern already mentioned here:
Batch set data from Dictionary into Redis
The number of objects per request varies depending on the payload size and the available bandwidth. As your site is hosted on Azure, I do not think bandwidth will be much of a concern; a sketch of the batching pattern follows below.
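A minimal sketch of that pattern with StackExchange.Redis's IBatch, assuming entity.Id works as a string key and JSON serialization is acceptable; for 30,000 objects you may also want to split the work into smaller chunks:

using System.Collections.Generic;
using System.Threading.Tasks;
using Newtonsoft.Json;
using StackExchange.Redis;

public static class SecondaryCacheBatchWriter
{
    public static void UpdateBatched(
        IDatabase db, IEnumerable<ColleagueReferenceDataEntity> entities)
    {
        IBatch batch = db.CreateBatch();
        var pending = new List<Task>();

        foreach (var entity in entities)
        {
            // Each write is only queued locally at this point.
            pending.Add(batch.StringSetAsync(
                entity.Id, JsonConvert.SerializeObject(entity)));
        }

        // Send the whole batch to Redis in one go, then wait for replies.
        batch.Execute();
        Task.WaitAll(pending.ToArray());
    }
}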
Hope that helps!

Kafka enabled Azure Event Hub: Invalid session timeout in Receiver

I'm trying to use the exact code provided here to send/receive data from a Kafka-enabled Azure Event Hub.
https://github.com/Azure/azure-event-hubs-for-kafka/tree/master/quickstart/dotnet/EventHubsForKafkaSample
I'm successful in sending messages to the event hub, but each time I try to initialize the receiver, I get this invalid session timeout error.
7|2018-11-14 19:10:52.967|ssarkar#consumer-1|SEND| [thrd:sasl_ssl://ssarkar-test.servicebus.windows.net:9093/bootstrap]: sasl_ssl://ssarkar-test.servicebus.windows.net:9093/0: Sent JoinGroupRequest (v0, 109 bytes # 0, CorrId 6)
7|2018-11-14 19:10:52.992|ssarkar#consumer-1|RECV| [thrd:sasl_ssl://ssarkar-test.servicebus.windows.net:9093/bootstrap]: sasl_ssl://ssarkar-test.servicebus.windows.net:9093/0: Received JoinGroupResponse (v0, 16 bytes, CorrId 6, rtt 24.28ms)
7|2018-11-14 19:10:52.992|ssarkar#consumer-1|REQERR| [thrd:main]: sasl_ssl://ssarkar-test.servicebus.windows.net:9093/0: JoinGroupRequest failed: Broker: Invalid session timeout: actions Permanent
The only timeout I am specifying is request.timeout.ms, and I have tried without it as well, but the error won't go away. I have also tried various values of session.timeout.ms, and still the error persists.
There is some info online about making sure that the session timeout value falls within the broker's min and max group session timeout values, but I don't have a way to view the broker configs on Azure Event Hubs, so I have no idea what they are supposed to be.
EH allows session timeouts between 6000 ms and 300000 ms. We also reject your JoinGroup request if the request's rebalance timeout is less than the session timeout.
Quick note - we aren't actually running real Kafka brokers, so there is a bit of added complexity to exposing broker configs. However, we will update our Github repository with configuration values/ranges!
11/22/19 edit - the configuration doc can be found here: https://github.com/Azure/azure-event-hubs-for-kafka/blob/master/CONFIGURATION.md
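For reference, a minimal sketch of consumer settings that satisfy those checks, assuming a recent Confluent.Kafka client like the one in the linked quickstart; the endpoint and credential values are placeholders:

using Confluent.Kafka;

// session.timeout.ms must fall inside Event Hubs' 6000-300000 ms range,
// and the rebalance timeout (max.poll.interval.ms in librdkafka-based
// clients) must not be lower than the session timeout.
var config = new ConsumerConfig
{
    BootstrapServers = "mynamespace.servicebus.windows.net:9093",
    SecurityProtocol = SecurityProtocol.SaslSsl,
    SaslMechanism = SaslMechanism.Plain,
    SaslUsername = "$ConnectionString",
    SaslPassword = "<your Event Hubs connection string>",
    GroupId = "my-consumer-group",
    SessionTimeoutMs = 30000,    // inside the allowed range
    MaxPollIntervalMs = 300000   // rebalance timeout >= session timeout
};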

IISNode debugger: Error during WebSocket handshake: Unexpected response code: 200

I've been having trouble getting the IISNode debugger to work with my project. I figured I'd start by getting it working with the example projects first, but I'm still not having any luck. I'm using IISNode 0.2.21 and node.js 8.11.2 with IIS 10.0.14393.0.
When I try to open the debugger, the entire process takes around 3 minutes and eventually fails with "WebSocket connection to 'ws://localhost/node/configuration/hello.js/debug/ws' failed: Error during WebSocket handshake: Unexpected response code: 200".
The web.config file found at c:\Program Files\iisnode\www\configuration\ reads as follows:
<configuration>
<system.webServer>
<!-- indicates that the hello.js file is a node.js application
to be handled by the iisnode module -->
<handlers>
<add name="iisnode" path="hello.js" verb="*" modules="iisnode" />
</handlers>
<!--
the iisnode section configures the behavior of the node.js IIS module
setting values below are defaults
* node_env - determines the environment (production, development, staging, ...) in which
child node processes run; if nonempty, is propagated to the child node processes as their NODE_ENV
environment variable; the default is the value of the IIS worker process's NODE_ENV
environment variable
* nodeProcessCommandLine - command line starting the node executable; in shared
hosting environments this setting would typically be locked at the machine scope.
* interceptor - fully qualified file name of a node.js application that will run instead of an actual application
the request targets; the fully qualified file name of the actual application file is provided as the first parameter
to the interceptor application; default interceptor supports iisnode logging
* nodeProcessCountPerApplication - number of node.exe processes that IIS will start per application;
setting this value to 0 results in creating one node.exe process per each processor on the machine
* maxConcurrentRequestsPerProcess - maximum number of requests one node process can
handle at a time
* maxNamedPipeConnectionRetry - number of times IIS will retry to establish a named pipe connection with a
node process in order to send a new HTTP request
* namedPipeConnectionRetryDelay - delay in milliseconds between connection retries
* maxNamedPipeConnectionPoolSize - maximum number of named pipe connections that will be kept in a connection pool;
connection pooling helps improve the performance of applications that process a large number of short lived HTTP requests
* maxNamedPipePooledConnectionAge - age of a pooled connection in milliseconds after which the connection is not reused for
subsequent requests
* asyncCompletionThreadCount - size of the IO thread pool maintained by the IIS module to process asynchronous IO; setting it
to 0 (default) results in creating one thread per each processor on the machine
* initialRequestBufferSize - initial size in bytes of a memory buffer allocated for a new HTTP request
* maxRequestBufferSize - maximum size in bytes of a memory buffer allocated per request; this is a hard limit of
the serialized form of HTTP request or response headers block
* watchedFiles - semi-colon separated list of files that will be watched for changes; a change to a file causes the application to recycle;
each entry consists of an optional directory name plus required file name which are relative to the directory where the main application entry point
is located; wild cards are allowed in the file name portion only; for example: "*.js;node_modules\foo\lib\options.json;app_data\*.config.json"
* uncFileChangesPollingInterval - applications are recycled when the underlying *.js file is modified; if the file resides
on a UNC share, the only reliable way to detect such modifications is to periodically poll for them; this setting
controls the polling interval
* gracefulShutdownTimeout - when a node.js file is modified, all node processes running this application are recycled;
this setting controls the time (in milliseconds) given for currently active requests to gracefully finish before the
process is terminated; during this time, all new requests are already dispatched to a new node process based on the fresh version
of the application
* loggingEnabled - controls whether stdout and stderr streams from node processes are captured and made available over HTTP
* logDirectory - directory name relative to the main application file that will store files with stdout and stderr captures;
individual log file names have unique file names; log files are created lazily (i.e. when the process actually writes something
to stdout or stderr); an HTML index of all log files is also maintained as index.html in that directory;
by default, if your application is at http://foo.com/bar.js, logs will be accessible at http://foo.com/iisnode;
SECURITY NOTE: if log files contain sensitive information, this setting should be modified to contain enough entropy to be considered
cryptographically secure; in most situations, a GUID is sufficient
* debuggingEnabled - controls whether the built-in debugger is available
* debuggerPortRange - range of TCP ports that can be used for communication between the node-inspector debugger and the debugee; iisnode
will round robin through this port range for subsequent debugging sessions and pick the next available (free) port to use from the range
* debuggerPathSegment - URL path segment used to access the built-in node-inspector debugger; given a node.js application at
http://foo.com/bar/baz.js, the debugger can be accessed at http://foo.com/bar/baz.js/{debuggerPathSegment}, by default
http://foo.com/bar/baz.js/debug
* debugHeaderEnabled - boolean indicating whether iisnode should attach the iisnode-debug HTTP response header with
diagnostics information to all responses
* maxLogFileSizeInKB - maximum size of a single log file in KB; once a log file exceeds this limit a new log file is created
* maxTotalLogFileSizeInKB - maximum total size of all log files in the logDirectory; once exceeded, old log files are removed
* maxLogFiles - maximum number of log files in the logDirectory; once exceeded, old log files are removed
* devErrorsEnabled - controls how much information is sent back in the HTTP response to the browser when an error occurs in iisnode;
when true, error conditions in iisnode result in HTTP 200 response with the body containing error details; when false,
iisnode will return generic HTTP 5xx responses
* flushResponse - controls whether each HTTP response body chunk is immediately flushed by iisnode; flushing each body chunk incurs
CPU cost but may improve latency in streaming scenarios
* enableXFF - controls whether iisnode adds or modifies the X-Forwarded-For request HTTP header with the IP address of the remote host
* promoteServerVars - comma delimited list of IIS server variables that will be propagated to the node.exe process in the form of
x-iisnode-<server_variable_name> HTTP request headers; for a list of IIS server variables available see
http://msdn.microsoft.com/en-us/library/ms524602(v=vs.90).aspx; for example "AUTH_USER,AUTH_TYPE"
* configOverrides - optional file name containing overrides of configuration settings of the iisnode section of web.config;
the format of the file is a small subset of YAML: each setting is represented as a <key>: <value> on a separate line
and comments start with # until the end of the line, e.g.
# This is a sample iisnode.yml file
nodeProcessCountPerApplication: 2
maxRequestBufferSize: 8192 # increasing from the default
# maxConcurrentRequestsPerProcess: 512 - commented out setting
-->
<iisnode
node_env="%node_env%"
nodeProcessCountPerApplication="1"
maxConcurrentRequestsPerProcess="1024"
maxNamedPipeConnectionRetry="100"
namedPipeConnectionRetryDelay="250"
maxNamedPipeConnectionPoolSize="512"
maxNamedPipePooledConnectionAge="30000"
asyncCompletionThreadCount="0"
initialRequestBufferSize="4096"
maxRequestBufferSize="65536"
watchedFiles="*.js;iisnode.yml"
uncFileChangesPollingInterval="5000"
gracefulShutdownTimeout="60000"
loggingEnabled="true"
logDirectory="iisnode"
debuggingEnabled="true"
debugHeaderEnabled="false"
debuggerPortRange="5058-6058"
debuggerPathSegment="debug"
maxLogFileSizeInKB="128"
maxTotalLogFileSizeInKB="1024"
maxLogFiles="20"
devErrorsEnabled="true"
flushResponse="false"
enableXFF="false"
promoteServerVars=""
configOverrides="iisnode.yml"
/>
<!--
One more setting that can be modified is the path to the node.exe executable and the interceptor:
<iisnode
nodeProcessCommandLine=""%programfiles%\nodejs\node.exe""
interceptor=""%programfiles%\iisnode\interceptor.js"" />
-->
</system.webServer>
</configuration>
I've read that permissions can be an issue, so I tried granting full access to IIS_IUSRS, DefaultAppPool, and Everyone, but that still hasn't helped.
I've also tried disabling websockets in web.config but that hasn't helped either.
<system.webServer>
<webSocket enabled="false" />
...
</system.webServer>
I feel like I'm so close to getting this to work, but I'm missing one small piece. Any ideas? I've tried every link and forum post I could find, and they all suggest the things I've listed above, but to no avail.
Your help is greatly appreciated.
Thanks.

VC Admin + Azure Web Apps + Hangfire Job + Worker Process requested recycle due to 'Percent Memory' limit

When we start re-indexing the catalog (~15,000 products) in VC Admin, we cannot finish the process because Azure automatically recycles the Web App.
Error message:
Worker Process requested recycle due to 'Percent Memory' limit. Memory Used: 4273229824 out of 3757625344 available. Exceeded 90 Percent of Memory.
The Web App's price plan is S2.
Please advise.
PS: A temporary workaround is to increase the price plan to S3.
It is caused by "Smart-cache" not using cache expiration.
How to solve this problem:
Update the VirtoCommerce.Cache module to the latest version.
Add the following section to the platform Web.config:
<system.runtime.caching>
<memoryCache>
<namedCaches>
<add name="memCacheHandle" physicalMemoryLimitPercentage="80" pollingInterval="00:00:30" />
</namedCaches>
</memoryCache>
</system.runtime.caching>
