How should I calculate cache miss rate of RISC-V rocket chip? - riscv

After I ran the code in the rocket chip emulator, all I got was the output log of cycles and retired instructions. How do I get information about total cache accesses and cache misses? Are there any other output files besides the default one?

No. If you need this information, you will have to modify the source to add performance counters. Cache accesses (both hits and misses) for L1 and L2 can be extracted from the TileLink transactions, then transported to the core and inserted into CSRs.
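Once such counters exist and the modified emulator prints them to its log, the miss rate itself is just misses divided by accesses. A minimal sketch, assuming the modified emulator emits `name: value` lines; the counter names `dcache_accesses` and `dcache_misses` are made up for illustration and would depend on your modification:

```shell
# Fabricated emulator log with hypothetical counter names.
cat > /tmp/emulator.log <<'EOF'
cycles: 1048576
instret: 524288
dcache_accesses: 20000
dcache_misses: 1234
EOF

# Miss rate = misses / accesses, read out of the log with awk.
awk -F': ' '
  $1 == "dcache_accesses" { acc  = $2 }
  $1 == "dcache_misses"   { miss = $2 }
  END { printf "miss rate: %.2f%%\n", 100 * miss / acc }
' /tmp/emulator.log
# → miss rate: 6.17%
```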

Related

L1D cache miss behaviour and Linux scheduling

I have a very specific question regarding some interesting behaviour I observed while running experiments on L1D cache miss rates for certain programs.
Basically, I tried to find out how high the L1D load miss rate of a Tomcat 9 server is when it runs alone on an isolated core. Additionally, I wanted to compare this value to the miss rate when I invalidate the L1D cache via the IA32_FLUSH_CMD MSR on each context switch.
As you would expect, the miss rate is higher when I invalidate the cache.
Now here comes the interesting part: I tried the same thing running both my Tomcat server and, additionally, an Apache2 web server. The result was that the miss rate for the Tomcat server was actually HIGHER than when I invalidate the cache on each context switch, which I don't really understand. I would expect it to be at most as high as the miss rate measured with invalidation, because while the web server may evict some (maybe even all) of Tomcat's lines, it should most likely keep some as well. And even if it evicted all of the lines, I'd expect that to give a similar miss rate to invalidating the cache.
Some important information about the system:
SMT is disabled, so no cache interference from a sibling hyperthread
For the tests I'm running benchmarks to send continuous requests to both servers.
i7-7700K (32 KB L1D cache)
Ubuntu Server 20.04, Kernel v5.13
If you have any idea about why this happens, I'd really appreciate the input.
Thank you!
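For reference, the kind of measurement described above is typically taken with `perf stat` on Linux; the generic event aliases below exist on most kernels, though exact availability varies by CPU and kernel version. The counts used for the arithmetic here are illustrative sample numbers, not real measurements:

```shell
# Count L1D loads and load misses for a running process for 10 seconds
# (requires perf and appropriate permissions; run against the Tomcat PID):
#   perf stat -e L1-dcache-loads,L1-dcache-load-misses -p <tomcat_pid> sleep 10
#
# Given the two counts perf reports, the miss rate is misses / loads.
# Sample numbers below are fabricated for illustration.
loads=48211337
misses=1923410
awk -v l="$loads" -v m="$misses" \
  'BEGIN { printf "L1D load miss rate: %.2f%%\n", 100 * m / l }'
```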

How to limit tabnine's memory usage?

I installed the Tabnine extension in VS Code. The tool is great; however, it occupies over 1 GB of memory and my computer is lagging. Is there any way to limit Tabnine's memory usage, or the memory usage of VS Code extensions in general?
On Linux this is possible with the tool called "timeout". When you start VS Code, a dedicated process is created for Tabnine, and with this tool it is possible to limit the RAM usage of a single process.
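As a sketch of the general Linux approach, here is a common alternative technique: capping the virtual memory of whatever a shell launches with the `ulimit -v` builtin (value in KiB). The demo command below just prints the limit back; to cap the editor you would replace it with the VS Code launcher (e.g. `code`):

```shell
# Cap virtual memory for a child shell and everything it launches.
# 1 GiB = 1048576 KiB.
limit_kib=1048576
bash -c "ulimit -v $limit_kib && ulimit -v"   # prints the limit now in effect
# On systemd-based distros, `systemd-run --user --scope -p MemoryMax=1G code`
# is another option that limits resident memory via cgroups.
```

Note that `ulimit -v` limits address space rather than resident RAM, so memory-mapped files count against it; the cgroup-based approach is closer to a true RAM cap.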
If you choose to use Tabnine Cloud, Tabnine will send blocks of code from your edited files to our server, allowing us to provide deep completion suggestions. These blocks of code will never be stored - they are used to calculate predictions and then immediately discarded.
We recommend the cloud version to optimize RAM usage if you have a stable internet connection.
Ref: Optimize RAM usage

How to monitor performance of rocket core?

In rocket/RocketCore.scala
There exists performance counter which describes cache misses, load, or store.
How can I see this information after rocket core finishes its running?
Could you give me an example on how to do this?
As far as I know, there's nothing on this guide.

Usage of Redis on Azure

I'm using Redis Cache on Azure, with the Standard 2.5 GB pricing tier. My question is: how can I see the current memory usage of the cache? In other words, how much cache storage remains for future use? I tried to find it on the dashboard, but couldn't.
You can configure Redis Cache diagnostics to get this information. Please refer to How to monitor Azure Redis Cache - Available metrics and reporting intervals for more details. From that link, one of the available metrics is Used Memory, which I believe is what you're looking for.
Used Memory: The amount of cache memory used for key/value pairs in the cache, in MB, during the specified reporting interval. This value maps to used_memory from the Redis INFO command. It does not include metadata or fragmentation.
I have not used Redis Cache personally, but if my memory serves me right, I read somewhere that you can also find this information by executing Redis commands through the Redis Console available in the portal. For more information, please see this link: https://azure.microsoft.com/en-in/documentation/articles/cache-configure/#redis-console.
Run the INFO memory command in the Redis Console and look for the used_memory_human parameter in the output.
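A sketch of what that looks like from the command line. The INFO output below is fabricated sample data so the parsing is visible; against a live cache you would pipe `redis-cli` in directly (Azure exposes Redis on port 6380 with TLS, so something like `redis-cli -h <host> -p 6380 -a <key> --tls INFO memory`):

```shell
# Fabricated INFO memory output (values illustrative; 2684354560 bytes = 2.5 GiB).
cat > /tmp/info_memory.txt <<'EOF'
# Memory
used_memory:104857600
used_memory_human:100.00M
maxmemory:2684354560
maxmemory_human:2.50G
EOF

# Human-readable current usage:
grep '^used_memory_human:' /tmp/info_memory.txt | cut -d: -f2

# Remaining capacity (maxmemory - used_memory), for the "how much is left" part:
awk -F: '
  /^used_memory:/ { u = $2 }
  /^maxmemory:/   { m = $2 }
  END { printf "remaining: %.2f GB\n", (m - u) / 2^30 }
' /tmp/info_memory.txt
```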

Redis cache in Azure was cleared unexpectedly

Recently, January 3rd, we observed interesting behavior with Redis Cache in Azure. It happened just once, and I'm trying to make sense of it.
We got an alert that CPU went above 80% on the Redis Cache service. Looking more closely, we discovered that used memory had dropped from the typical 100 MB to almost 0. It was then quickly repopulated back to normal, I assume by normal usage of the application. While it was being repopulated, there was this CPU spike.
It looked as if the cache had been reset. However, this is a production environment with very limited access, and we are 100% sure that nobody reset it. There were no deployments around that time, and I couldn't find anything in the diagnostic logs.
Questions:
1. Any ideas what could happen?
2. Where can I look, what to look for?
Update: We are on the Standard (C1) tier.
No customers reported any problems; I just hate not understanding what is going on.
It depends on which cache tier you are using.
The basic tier only has one node with the cache data stored in memory. Any loss of memory in that node will cause the cache data to be lost.
If you are using the Standard tier then there are two nodes, a primary and a secondary, with cached data asynchronously replicated from primary to secondary. If the primary is offline, client requests are served by the secondary. In this scenario the chance of cache data loss is low, since it basically requires both nodes to be offline at the same time, which should only happen during hardware failure (Azure ensures that routine maintenance such as OS updates is not performed on both nodes at the same time).
If you are using the Premium tier then the cache data can be backed by persistent storage, so you should not experience cache data loss.
https://azure.microsoft.com/en-us/documentation/articles/cache-faq/#what-redis-cache-offering-and-size-should-i-use has some more information about this.