I have a problem with a Varnish server. Varnish's main task is to cache images, and in the configuration I set the TTL for images to 365d. What I've noticed is that after one day I get an X-Cache: MISS header.
After one more request it's a HIT, but a day later it's a MISS again. Why is that happening? The Varnish service has 30 GB of RAM available (100% usage) and additionally uses 45 GB of virtual memory. Are images being removed because of lack of space?
Most likely yes. Check the lru_nuked counter in varnishstat; if it is greater than 0, you don't have enough space for caching.
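To make this concrete, here is roughly how you can check it and, if needed, give the cache more room (counter names differ slightly between Varnish versions, and the storage path and size below are only placeholders):

    # Objects forcefully evicted to make room; anything above 0 means the cache is too small.
    varnishstat -1 | grep -i lru_nuked

    # One way to enlarge the cache is a bigger storage backend when starting varnishd,
    # e.g. a file-backed store larger than RAM (path and size are examples only):
    varnishd -a :80 -f /etc/varnish/default.vcl -s file,/var/lib/varnish/storage.bin,100G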
I have a very specific question regarding some interesting behaviour I observed while performing experiments with L1D cache miss rates for certain programs.
Basically, I tried to find out how high the L1D load miss rate of a Tomcat9 server is when it's running alone on an isolated core. Additionally, I wanted to compare this value to the miss rate when I invalidate the L1D cache using the IA32_FLUSH_CMD MSR upon each context switch.
As you would expect, the miss rate is higher when I invalidate the cache.
Now here comes the interesting part: I tried the same thing running both my Tomcat server and, additionally, an Apache2 web server. The result was that the miss rate for the Tomcat server is actually HIGHER than when I invalidate the cache on each context switch, which I don't really understand. I would expect it to be at most as high as the miss rate measured with cache invalidation, because, while the web server may evict some (maybe even all) of the Tomcat server's lines, it should most likely keep some as well. And even if it evicted all of the lines, I'd expect that to give me a similar miss rate as when I invalidate the cache.
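For reference, this is roughly how I take the measurements; the core number and measurement window are just examples, and the manual MSR write below assumes msr-tools (in the actual experiment the flush is issued from the kernel on every context switch):

    # L1D load misses vs. loads for everything pinned to core 3, over a 60-second window.
    sudo perf stat -C 3 -e L1-dcache-loads,L1-dcache-load-misses -- sleep 60

    # Flush the L1D by hand via the IA32_FLUSH_CMD MSR (0x10b, bit 0 = L1D_FLUSH).
    # May require 'sudo modprobe msr' first.
    sudo wrmsr -p 3 0x10b 0x1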
Some important information about the system:
SMT is disabled, so no cache interference from other CPUs
For the tests I'm running benchmarks to send continuous requests to both servers.
i7-7700K (32 KB L1D cache)
Ubuntu Server 20.04, Kernel v5.13
If you have any idea about why this happens, I'd really appreciate the input.
Thank you!
The Node app that I am working on becomes very slow and sometimes doesn't respond at all. While checking the logs I found that there was a problem with memory. My app uses the entire ~1400 MB of default heap space, so I searched for a solution and found advice to increase the memory limit. I raised the max old space size to 6 GB, but the app still hits 6 GB, hangs for a period of time and then restarts, which makes it very slow. Is there any way to solve this problem, like clearing the memory, or some other solution to make it fast?
Note: I am using Sequelize queries for SQL; I'm not sure whether the problem is because of that.
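For reference, this is roughly how I start the app and watch its memory at the moment (the file name and interval are just examples):

    # Start the app with a 6 GB old-space limit (app.js is a placeholder name).
    node --max-old-space-size=6144 app.js

    # In another terminal, watch the resident and virtual memory of the node process.
    watch -n 5 'ps -o pid,rss,vsz,command -C node'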
Thanks in advance
I'm running Node.js on a server with only 512MB of RAM. The problem is when I run a script, it will be killed due to out of memory.
By default the Node.js memory limit is 512MB. So I think using --max-old-space-size is useless.
Here is the relevant content of /var/log/syslog:
Oct 7 09:24:42 ubuntu-user kernel: [72604.230204] Out of memory: Kill process 6422 (node) score 774 or sacrifice child
Oct 7 09:24:42 ubuntu-user kernel: [72604.230351] Killed process 6422 (node) total-vm:1575132kB, anon-rss:396268kB, file-rss:0kB
Is there a way to get rid of out of memory without upgrading the memory? (like using persistent storage as additional RAM)
Update:
It's a scraper which uses the Node modules request and cheerio. When it runs, it opens hundreds or thousands of webpages (but not in parallel).
If you're giving Node access to every last megabyte of the available 512 and it's still not enough, then there are two ways forward:
Reduce the memory requirements of your program. This may or may not be possible. If you want help with this, you should post another question detailing your functionality and memory usage.
Get more memory for your server. 512 MB is not much, especially if you're running other services (such as databases or message queues) which require in-memory storage.
There is a third possibility of using swap space (disk storage that acts as a memory backup), but this will have a strong impact on performance. If you still want it, Google how to set this up for your operating system; there are a lot of articles on the topic. This is OS configuration, not Node's.
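On a typical Ubuntu server, setting up a swap file looks roughly like this (the path and size are only examples, and expect it to be far slower than real RAM):

    # Create a 1 GB swap file, lock down its permissions, and enable it.
    sudo fallocate -l 1G /swapfile
    sudo chmod 600 /swapfile
    sudo mkswap /swapfile
    sudo swapon /swapfile

    # Make it persistent across reboots.
    echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab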
Old question, but maybe this answer will help people. Using --max-old-space-size is not useless.
Before Node.js 12, the heap size limit depended on whether the system was 32- or 64-bit. So, following the documentation, on 64-bit machines that (the old generation alone) would be 1400 MB, far above your 512 MB.
From Node.js 12 onwards, the heap size takes the available system RAM into account; however, Node.js' heap isn't the only thing in memory, especially if your server isn't dedicated to it. So setting --max-old-space-size lets you put a limit on the old-generation heap, and if your application gets close to it, the garbage collector will be triggered and will try to free memory.
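For a 512 MB machine like the one in the question, that could look something like this (the 400 MB value and the script name are only illustrative; leave some headroom for the rest of the process and the OS):

    # Cap the old-generation heap well below the machine's 512 MB of RAM,
    # so V8's garbage collector kicks in before the kernel OOM killer does.
    node --max-old-space-size=400 scraper.js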
I've written a post about how I've observed this: https://loadteststories.com/nodejs-kubernetes-an-oom-serial-killer-story/
I am researching the IIS Application Initialization module, and from what I can see, when the application pool's Start Mode is set to AlwaysRunning, it basically starts a new worker process that keeps running even if there aren't any requests. When this option is applied, the process starts automatically.
My concern is memory management and CPU usage, specifically how this is handled since the process always runs.
How does this compare to setting the Start Mode to OnDemand and increasing the Idle Time-out to a couple of days? That way, I guess, the process will sit idle for x days before it's terminated, then be reinitialized on the next request and keep running for another couple of days. If I set it to, say, 1.5 days, someone is bound to use the application at least once a day, so the process will keep running and never be terminated.
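For reference, these are the two configurations I'm comparing, set via appcmd (the pool name is a placeholder, and the time-out value and its d.hh:mm:ss format may need adjusting for your IIS version):

    REM Option 1: keep the worker process alive permanently.
    %windir%\system32\inetsrv\appcmd set apppool "MyAppPool" /startMode:AlwaysRunning

    REM Option 2: start on demand, but with a very long idle time-out (here 1.5 days).
    %windir%\system32\inetsrv\appcmd set apppool "MyAppPool" /startMode:OnDemand /processModel.idleTimeout:1.12:00:00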
Can anyone share experience regarding this topic?
Thanks
I have a multisite application that runs a few sites under separate app pools. All are set to OnDemand Start Mode with an Idle Time-out of 1740 minutes, and I also use Page Output Cache in the app with different durations for different page types. There is also NHibernate behind the scenes, and the DB is MySQL.
The most active site has more than 100k visits per day and is almost never idle. When it starts after a recycle, it needs 30 seconds to 2 minutes to become fully operable, depending on the requests at that moment, and CPU usage goes from 40% to 70%. Once the site is up, CPU usage is very low (0-4%) if there are no new entries in the DB, and memory usage is around 3 GB when everything is cached. Sometimes CPU goes up to 20% if there are new requests (for uncached content) and a new entry is being saved at the same moment.
Also, Page Output Cache works on a first-come-first-served basis, so it can cause a small problem while the caching is being done: that user must wait, and a little more CPU is needed to build the cache entry.
The biggest problem in my case is using NHibernate and MySQL, but Page Output Cache solved it for me once I decided to cache the page modules and content. I realized it is better for the application to starve for memory than for CPU.
3.5k visitors at one moment, when everything is cached, give me the same memory usage (3 GB) and overall server CPU around 40%.
The other sites use around 1-1.5 GB of memory, and CPU never goes above 20% at startup.
Another application with the same app pool settings, but using MSSQL with EF, I can't even notice running on the server. It is used by 10-60 users per minute, there is not much content except embed codes, and it uses 1-5% CPU and never more than 8 MB of memory. On recycle it is up in less than 10 seconds.
From this experience I can tell you that it all depends on what the application serves, how it works :) and how much content you have.
If you use OnDemand with a long Idle Time-out, it will effectively be the same as AlwaysRunning when the process is not being used at that moment. If you use OnDemand with a short Idle Time-out, you will more often need CPU to start the process.
I am running around eight Solr server instances (version 3.5) behind a load balancer. All servers are identical, and the LB is weighted by number of connections. The servers hold around 4M documents and receive a constant flow of queries. When a Solr server starts, it works fine, but after running for some time it starts to take longer to respond to queries and the server's I/O goes crazy, up to 100% (as shown in the New Relic graph).
If the servers behave well in the beginning, why do they start to fail after some time? If I restart a server, it goes back to low I/O for a while, and then this repeats over and over.
The answer to this question is related to the content in this blog post.
What happens in this case is that queries depend heavily on reading the Solr indexes. These indexes live on disk, so I/O is high. To optimize disk access, Linux keeps an in-memory cache (the page cache) of the most frequently accessed disk areas, using whatever memory is not occupied by applications. When that free memory runs out, the server has to read from disk again. That is why, right after Solr restarts, the JVM occupies less memory, there is more free space for the disk cache, and I/O is low.
(The problem is happening on a server with 15 GB of RAM and a 20 GB Solr index.)
The solution is to simply increase the server's RAM so that the whole index fits into memory and almost no disk I/O is required.
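A quick way to sanity-check this on the box (the index path is just an example; Solr 3.5 directory layouts vary):

    # How big is the index on disk?
    du -sh /opt/solr/data/index

    # How much memory is left for the OS page cache after the JVM takes its share?
    free -h

    # Watch disk utilization to confirm reads are actually hitting the disk.
    iostat -x 5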