I am getting an error when using a Redis cache time of 0. With a Redis cache time of 1, it works as expected.
How can I set the Redis cache time value to 0? Please help.
ErrorMessage
"Message":"An error has occurred.","ExceptionMessage":"invalid expire time in SETEX, sPort: 12702, LastCommand: ","ExceptionType":"ServiceStack.Redis.RedisResponseException"
I want to set 0 as the expire time because I am using a dynamic page that has many chunks. The Redis cache time comes from a config file. Example: chunk 1 has a Redis cache time of 2 minutes. At certain times I don't want Redis caching at all, and at that point I change the Redis cache time in the configuration file to 0.
It seems that you want to tell Redis not to store the key by passing an expiry of 0, which is rather inconvenient.
If you don't want to change your application code, you could save it for just 1 second, which is the minimum, since the SETEX command expects a positive (greater than zero) expire time.
Otherwise, you can tweak your code to skip storing in the cache when the TTL is zero, or save it for 1 millisecond in Redis using PSETEX instead of SETEX.
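If it helps, the "skip the write" option could look roughly like the sketch below. It uses the node-redis client, while the question itself uses ServiceStack.Redis, so the names and API here are illustrative only:

import { RedisClientType } from "redis";

// Minimal sketch: skip the cache write entirely when the configured TTL is 0.
// "cacheSeconds" stands in for the value read from the config file.
async function cacheChunk(
  client: RedisClientType,
  key: string,
  html: string,
  cacheSeconds: number
): Promise<void> {
  if (cacheSeconds > 0) {
    // SETEX only accepts a positive expire time, so it is only issued when > 0.
    await client.setEx(key, cacheSeconds, html);
  }
  // cacheSeconds === 0: do not store the chunk, so the page is rebuilt on every request.
}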
Related
I am using client.setex() to create a Redis cache entry with an expiration time of, say, 3600 seconds, but I also update it from time to time, and for that I am using client.set().
I am wondering whether the expiration time stays the same or gets extended by 3600 every time I update that cache.
set() will remove the existing expiration time in Redis.
You can use:
client.set(key, value, "EX", ttl)
to keep the expiration going, where ttl is the remaining time-to-live read beforehand with client.ttl(key) (the ttl call is asynchronous, so its result cannot be passed inline).
For more detail:
https://stackoverflow.com/a/21897851/5677187
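For reference, the same idea with the node-redis v4 promise API looks roughly like this (the client library and version used in the question are not stated, so treat the exact syntax as an assumption):

import { createClient } from "redis";

// Sketch: update the value while preserving the remaining TTL by reading it
// first and passing it back on the SET.
async function updateKeepingTtl(key: string, value: string): Promise<void> {
  const client = createClient();
  await client.connect();

  const ttl = await client.ttl(key);            // seconds left; -1 = no expiry, -2 = key missing
  if (ttl > 0) {
    await client.set(key, value, { EX: ttl });  // re-apply the remaining expiration
  } else {
    await client.set(key, value);               // nothing to preserve
  }

  await client.quit();
}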
I'm running a Node web server using the Express module and would like to include the following features in it:
primarily, track every visitor's source IP, time, and whether the visit is unique or repeated, saving this to a JSON file;
secondly, if someone hits my server more than 10 times in the last 15 seconds looking for vulnerabilities (non-existent pages), collect those attempts in a buffer (holding 30 seconds' worth of data) and, once the threshold is reached, start blocking that source IP for X number of hours.
I'm interested in finding out the fastest way to save this information with a very minimal performance penalty.
My choice so far is to create a RAMDISK and save this info to a continuously appended file on that RAMDISK.
The visitor info gets written to a database every few minutes.
The info for notorious visitors is reset every 30 seconds so as to keep the lookup quick.
The question I have is: is writing to a RAMDISK the fastest way to retain information (so it's not lost during a crash), or is there a better/faster way to achieve this goal?
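For the abuse-tracking part described above, a rough in-memory sketch (Express middleware, with the thresholds taken from the question; nothing here reflects an existing implementation) might look like:

import express, { Request, Response, NextFunction } from "express";

// Rough in-memory sketch of the tracking described above. The numbers come
// from the question; the route layout and names are illustrative assumptions.
const WINDOW_MS = 15_000;        // "more than 10 times in the last 15 seconds"
const THRESHOLD = 10;
const BLOCK_MS = 60 * 60_000;    // block for X hours; 1 hour used as a placeholder

const hits = new Map<string, number[]>();   // ip -> timestamps of 404 probes
const blocked = new Map<string, number>();  // ip -> time when the block expires

const app = express();

// Reject requests from IPs that are currently blocked.
app.use((req: Request, res: Response, next: NextFunction) => {
  const until = blocked.get(req.ip ?? "");
  if (until !== undefined && until > Date.now()) {
    res.status(403).end();
    return;
  }
  next();
});

// Real routes would be registered here; anything falling through is a
// non-existent page, i.e. a potential vulnerability probe.
app.use((req: Request, res: Response) => {
  const ip = req.ip ?? "";
  const now = Date.now();
  const recent = (hits.get(ip) ?? []).filter(t => now - t < WINDOW_MS);
  recent.push(now);
  hits.set(ip, recent);
  if (recent.length > THRESHOLD) {
    blocked.set(ip, now + BLOCK_MS);        // start blocking this source IP
  }
  res.status(404).end();
});

// "reset every 30 seconds so as to keep the lookup quick"
setInterval(() => hits.clear(), 30_000);

app.listen(3000);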
I'm trying to make an expiring-key system with Node.js for an app, and to check expiration (30 days) I decided to use a while loop. As I suspected, I need another thread to do this, so I tried using worker_threads, but it gives me "Module did not self-register" at "node_modules\Canvas\lib\bindings.js:3:18". Is there a way to do this, or another way to make the keys expire? I'm new to Node and I don't have any other idea.
Note: to check whether a key is expired, it stores the time in millis when the key was obtained, stores another value with the time in millis 30 days after that, and does a simple (expireTime - gotTime <= 0) check.
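One common alternative to looping in a separate thread, sketched below, is to store the expiry timestamp with the key and run the same kind of (expireTime - now <= 0) check whenever the key is used; the data structure here is purely illustrative, not the app's actual storage:

// Sketch of a lazy expiry check done at access time instead of in a loop.
interface StoredKey {
  value: string;
  expiresAt: number; // time in millis 30 days after the key was issued
}

const keys = new Map<string, StoredKey>();
const THIRTY_DAYS_MS = 30 * 24 * 60 * 60 * 1000;

function issueKey(id: string, value: string): void {
  keys.set(id, { value, expiresAt: Date.now() + THIRTY_DAYS_MS });
}

function getKey(id: string): string | undefined {
  const entry = keys.get(id);
  if (!entry) return undefined;
  if (entry.expiresAt - Date.now() <= 0) {  // expired: same comparison, done at read time
    keys.delete(id);
    return undefined;
  }
  return entry.value;
}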
We want to cache some entries (i.e. depending upon a predicate) in a continuous query cache on the client for an IMap. But we want to send updates to the CQC only after some delay (e.g. 30 seconds), even if the entries receive something like 100 updates per second. We can achieve this by setting delay seconds to 30 and coalescing to true.
QueryCacheConfig cqc = new QueryCacheConfig();
cqc.setDelaySeconds(30);
cqc.setCoalesce(true);
cqc.setBatchSize(30);
CQC fits perfectly well for the above use case.
But we are noticing that the CQC does not receive updates after the delay seconds have passed until the batch size capacity is reached. Is this the expected behavior?
We thought the CQC would receive the latest updated value for entries once the delay seconds pass or the batch size reaches its capacity.
delaySeconds and batchSize have an 'OR' relation: updates are pushed to the caches either when batchSize is reached or when delaySeconds have passed. If coalesce is true, then only the latest update of a key is pushed to the cache.
We have noticed some issues when testing with IntelliJ; please try using another IDE if you are using IntelliJ.
Say I have about 150 requests coming in every second to an API (Node.js), which are then logged in Redis. At that rate, a moderately priced RedisToGo instance will fill up every hour or so.
The logs are only necessary to generate daily/monthly/annual statistics: which was the top requested keyword, which was the top requested URL, the total number of requests per day, etc. No super heavy calculations, but a somewhat time-consuming run through arrays to see which is the most frequent element in each.
If I analyze and then dump this data (with a setInterval function in Node, maybe?) every 30 minutes or so, it doesn't seem like such a big deal. But what if all of a sudden I have to deal with, say, 2500 requests per second?
All of a sudden I'm dealing with ~4.5 GB of data per hour, about 2.25 GB every 30 minutes. Even with how fast Redis/Node are, it'd still take a minute to calculate the most frequent requests.
Questions:
What will happen to the Redis instance while 2.25 GB worth of data is being processed? (from a list, I imagine)
Is there a better way to deal with potentially large amounts of log data than moving it to Redis and then flushing it out periodically?
IMO, you should not use Redis as a buffer to store your log lines and process them in batch afterwards. It does not really make sense to consume memory for this. You would be better served by collecting your logs on a single server and writing them to a filesystem.
Now, what you can do with Redis is calculate your statistics in real time. This is where Redis really shines. Instead of keeping the raw data in Redis (to be processed in batch later), you can directly store and aggregate the statistics you need to calculate.
For instance, for each log line, you could pipeline the following commands to Redis:
zincrby day:top:keyword 1 my_keyword
zincrby day:top:url 1 my_url
incr day:nb_req
This will calculate the top keywords, top urls and number of requests for the current day. At the end of the day:
# Save data and reset counters (atomically)
multi
rename day:top:keyword tmp:top:keyword
rename day:top:url tmp:top:url
rename day:nb_req tmp:nb_req
exec
# Keep only the 100 top keywords and urls of the day
zremrangebyrank tmp:top:keyword 0 -101
zremrangebyrank tmp:top:url 0 -101
# Aggregate monthly statistics for keyword
multi
rename month:top:keyword tmp
zunionstore month:top:keyword 2 tmp tmp:top:keyword
del tmp tmp:top:keyword
exec
# Aggregate monthly statistics for url
multi
rename month:top:url tmp
zunionstore month:top:url 2 tmp tmp:top:url
del tmp tmp:top:url
exec
# Aggregate number of requests of the month
get tmp:nb_req
incrby month:nb_req <result of the previous command>
del tmp:nb_req
At the end of the month, the process is completely similar (using zunionstore or get/incr on monthly data to aggregate the yearly data).
The main benefit of this approach is that the number of operations done for each log line is limited, while the monthly and yearly aggregations can easily be calculated.
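For concreteness, the per-log-line step could look like this in the Node.js handler (a sketch using node-redis v4; the key names follow the commands above, everything else is an illustrative assumption):

import { RedisClientType } from "redis";

// Shape of one parsed log line; purely illustrative.
interface LogLine {
  keyword: string;
  url: string;
}

// Send the three counters for one log line in a single round trip.
async function recordLogLine(client: RedisClientType, line: LogLine): Promise<void> {
  await client
    .multi()
    .zIncrBy("day:top:keyword", 1, line.keyword)
    .zIncrBy("day:top:url", 1, line.url)
    .incr("day:nb_req")
    .exec();
}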
How about using Flume or Chukwa (or perhaps even Scribe) to move the log data to a different server (if available)? You could store the log data using Hadoop/HBase or any other disk-based store.
https://cwiki.apache.org/FLUME/
http://incubator.apache.org/chukwa/
https://github.com/facebook/scribe/