Limit Chromium cache size in Linux

I need to limit the cache size of Chromium on my Debian computer. I've tried to edit master-preferences to solve this, but every time I reopen the browser the file restores its original values.
How can I modify these values so that the cache is limited to, for example, 10 MB every time?

An easy fix is to add the following argument to the command used to launch Chromium:
chromium-browser --disk-cache-size=n
For example, if n is 500000000, that is 500 MB.
You can check that the limit took effect by typing the following into your browser and looking at the Max Size value.
chrome://net-internals/#httpCache
Please see https://askubuntu.com/questions/104415/how-do-i-increase-cache-size-in-chrome/104429#104429
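For the 10 MB limit asked about in the question, a minimal sketch (assuming you launch Chromium from a shell; the alias and the ~/.bashrc location are just one illustrative way to make it stick) could be:
# Cap the cache at roughly 10 MB (the value is given in bytes)
chromium-browser --disk-cache-size=10000000
# One option to make it permanent: an alias in ~/.bashrc
alias chromium-browser='chromium-browser --disk-cache-size=10000000'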

Related

The extension package size exceeds the maximum package size limit

I made a custom extension task in VS Code, and I'm facing a problem while uploading the extension package to the Marketplace. It shows the error "The extension package size '32440237 bytes' exceeds the maximum package size '26214400 bytes'", as my extension size is ~32 MB.
When I looked deeper, I found that the size of the node_modules folder (where all packages are present) increases when I install external packages such as:
azure-pipelines-task-lib
azure-pipelines-tasks-azure-arm-rest-v2
@pulumi/azure
I also tried the solution given in this, but no luck.
Is there any way to decrease or compress the size of node_modules or the extension package?
Or, why is the size of the node_modules folder increasing?
If anyone has knowledge of this, please let me know.
Thanks in advance!
Well, I got the answer to this.
First, in the VS Marketplace, the default maximum size for an uploaded extension package is 25 MB.
So, if your extension package exceeds the maximum size limit, don't worry; just do one thing:
Email your issue to vsmarketplace@microsoft.com. Someone from the support team will contact you within 24-48 hours.
Lastly, they will ask you the reason for raising the size limit, so give a proper justification. Then, bingo!
Your issue will be resolved within 2-3 days at most.
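As for the question's other point (why node_modules keeps growing), a quick sketch for inspecting which dependencies take up the space, assuming a standard Node.js project layout and GNU coreutils, is:
# Total size of node_modules, then its 20 largest entries
du -sh node_modules
du -sh node_modules/* | sort -rh | head -20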

Script should run by own during server startup

We are using the JON tool to monitor our infrastructure. We set up the threshold for RAM usage (60%, 65% of total RAM) using the tool's GUI.
If the RAM size of a server (which is in the cloud) is increased, we need to manually change the threshold level using the GUI. To avoid that, I wrote a shell script which uses the JON CLI to update the RAM threshold based on the current RAM size; the script works and there is no problem with it.
For example, if the RAM size is initially 8 GB, we set the threshold (65% of 8 GB) based on the current size. If they later increase the size to 16 GB, we need to set the threshold (65% of 16 GB) manually. To avoid that, I created the shell script that uses the JON CLI to update the threshold value. (During maintenance they shut down the servers and increase the RAM size as needed.)
Problem:
If the RAM size is increased, I need to run the script manually to set the threshold. Since they bring the server down during size changes, the script needs to run on its own once the server is started, so I placed my script in the /etc/rc.local file. Recently the team increased the RAM size and started the server, but there was no change in the threshold (which means the script did not run on its own), so I had to run the script manually to update the threshold.
Expectation:
The script should run on its own during server startup.
Flavour: CentOS 6.5
Even though this is a basic thing, please guide and help me with this.
If I understand the problem correctly, your script does not start from /etc/rc.local.
First, check whether /etc/rc.local is executed at all.
For that add something like:
touch /tmp/created-by-rc.local
to it and restart the server.
After that you will know whether /etc/rc.local is run,
and depending on that you can go one way or the other.
Also, you can create a dedicated init script for your script.
Check this article, where the procedure is described in detail:
https://techarena51.com/index.php/how-to-create-an-init-script-on-centos-6/
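For reference, a minimal sketch of such a SysV init script on CentOS 6 (the path /opt/scripts/update-threshold.sh is only a placeholder for your JON CLI script) might look like this:
#!/bin/sh
# /etc/init.d/update-threshold  (sketch only)
# chkconfig: 345 99 01
# description: Updates the JON RAM threshold at boot
case "$1" in
  start)
    # Run the threshold-update script once and log its output
    /opt/scripts/update-threshold.sh >> /var/log/update-threshold.log 2>&1
    ;;
  stop|restart|status)
    # Nothing to do for a one-shot boot task
    ;;
esac
exit 0
It would then be enabled with chmod +x /etc/init.d/update-threshold followed by chkconfig --add update-threshold.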

rsync hang in the middle of transfer with a fixed position

I am trying to use rsync to transfer some big files between servers.
For some reason, when the file is big enough (2 GB - 4 GB), rsync hangs in the middle at exactly the same position, i.e., the progress at which it hangs always sticks to the same place even if I retry.
If I remove the file from the destination server first, then the rsync would work fine.
This is the command I used:
/usr/bin/rsync --delete -avz --progress --exclude-from=excludes.txt /path/to/src user@server:/path/to/dest
I have tried adding --delete-during and --delete-delay, but no luck.
The rsync version is 3.1.0, protocol version 31.
Any advice please? Thanks!
Eventually I solved the problem by removing the compression option: -z
I still don't know why that is.
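In other words, the command from the question with only the -z removed:
/usr/bin/rsync --delete -av --progress --exclude-from=excludes.txt /path/to/src user@server:/path/to/dest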
I had the same problem (trying to rsync multiple files of up to 500GiB each between my home NAS and a remote server).
In my case the solution (mentioned here) was to add to "/etc/ssh/sshd_config" (on the server to which I was connecting) the following:
ClientAliveInterval 15
ClientAliveCountMax 240
"ClientAliveInterval X" will send a kind of message/"probe" every X seconds to check if the client is still alive (default is not to do anything).
"ClientAliveCountMax Y" will terminate the connection if after Y-probes there has been no reply.
I guess the root cause of the problem is that in some cases the compression (and/or block diff) performed locally on the server takes so long that the SSH connection (created by the rsync program) is automatically dropped while that work is still ongoing.
Another workaround (e.g. if "sshd_config" cannot be changed) might be to use the rsync option "--new-compress" and/or a lower compression level (e.g. "rsync --new-compress --compress-level=1", etc.): in my case the new compression (and diff) algorithm is a lot faster than the old/classical one, so the SSH timeout might not occur the way it does with the default settings.
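Applied to the command from the question, that workaround would look roughly like this (assuming your rsync build is new enough to support --new-compress):
/usr/bin/rsync --delete -avz --new-compress --compress-level=1 --progress --exclude-from=excludes.txt /path/to/src user@server:/path/to/dest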
The problem for me was that I thought I had plenty of disk space on a drive, but the partition was not using the whole disk and was half the size I expected.
So check the available space with lsblk and df -h and make sure the disk you are writing to reports all the space available on the device.
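A quick way to run that check on the destination side (the path is just an example):
# Show block devices/partitions and their sizes
lsblk
# Show free space on the filesystem you are actually writing to
df -h /path/to/dest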

When does memcached remove items?

I'm relatively new to memcached and only know the basics of getting it set up and working. I've run into an issue on our Magento-based website where the cache is growing too large and causing some slowness when editing product details. I telnetted to the memcached server and ran stats and noticed that there were nearly 900 megs and over 65500 items in there. I typed the flush_all command and re-ran stats and it's still the same. After some research I have found that flushing it invalidates the entries but doesn't actually free up the space. It will do so over time as new items are added. From what I have seen, it never frees up the nearly 900 megs of space and never deletes the 65000+ items that seem to be stuck in there. I haven't tried restarting memcached yet as this is a live site and I don't want to cause any problems. If restarting the server frees up the space, that's still not a solution because I don't want to have to do that every time. Can someone please help me understand what's going on and how I can fix this?
You'll want to tweak the maximum amount of memory Memcached is allowed to use in an instance. At the command line, the -m flag is used to set the maximum number of megabytes the cache can hold. Flushing the cache merely invalidates everything in it, and the items are evicted lazily. If you want memcached to use less memory, there's no getting around it: you'll have to restart it with less memory.
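For example, a sketch of restarting it with a smaller cap (512 MB, the user and port are just illustrative defaults; on Debian/Ubuntu-style installs the -m value usually lives in /etc/memcached.conf):
# Start memcached as a daemon with a 512 MB memory cap
memcached -d -m 512 -u memcache -p 11211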

Linux open-file limit

We are facing a situation where a process gets stuck because it runs out of its open-files limit. The global file-max setting was set extremely high (in sysctl.conf), and the per-user value was also set high in /etc/security/limits.conf. Even ulimit -n reflects the per-user value when run as that headless user (the process owner). So the question is: does this change require a system reboot (my understanding is that it doesn't)? Has anyone faced a similar problem? I am running Ubuntu Lucid and the application is a Java process. The ephemeral port range is also high enough, and when checked during the issue, the process had 1024 open files (note: the default value), as reported by lsof.
One problem you might run into is that the fd_set used by select is limited to FD_SETSIZE, which is fixed at compile time (in this case, at compile time of the JRE) and is limited to 1024.
#define FD_SETSIZE __FD_SETSIZE
/usr/include/bits/typesizes.h:#define __FD_SETSIZE 1024
Luckily both the C library and the kernel can handle arbitrarily sized fd_sets, so for a compiled C program it is possible to raise that limit.
Assuming you have edited the file-max value in sysctl.conf and /etc/security/limits.conf correctly, then:
edit /etc/pam.d/login, adding the line:
session required /lib/security/pam_limits.so
and then, as root, run:
ulimit -n unlimited
Note that you may need to log out and back in again before the changes take effect.
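Either way, you can verify what actually applies without a reboot; for example (headlessuser and <pid> are placeholders for your process owner and the Java process id):
# Limit seen by a fresh shell of the headless user
su - headlessuser -c 'ulimit -n'
# Limits the already-running process is actually using
grep 'open files' /proc/<pid>/limits
# Number of files it currently has open
ls /proc/<pid>/fd | wc -l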
