I am currently tracing "cannot fork()" errors on my Ubuntu server, and I was able to pinpoint them to the pids.max value of 700 under /sys/fs/cgroup/pids/.
However, I am only able to set the values /system.slice/pids.max and /user.slice/pids.max - not pids.max. Plus, these reset after a reboot to the value max, which again enforces the global pids.max value.
Is it possible to simply change it from 700 to something higher? root + sudo were of no help.
Is there another way to override this value?
After a long back and forth with my hosting provider, they finally spilled the beans.
I was renting a V-Server with very beefy hardware for very little money. The catch? You can only run 700 concurrent tasks. That's where the pids.max value came from, and it overruled the TasksMax and numprocs values.
I think you're looking for the DefaultTasksMax= directive in /etc/systemd/system.conf.
You can check the runtime value by issuing systemctl show -p DefaultTasksMax:
$ systemctl show -p DefaultTasksMax
DefaultTasksMax=19096
If you wish to change it, simply edit the corresponding directive line in /etc/systemd/system.conf. A similar directive (TasksMax=) exists to tweak this setting on a per-unit basis; see the sketch after the documentation snippets below.
Relevant documentation[0][1] snippets:
TasksMax=N
Specify the maximum number of tasks that may be created in the unit. This ensures that the number of tasks accounted for the unit (see above) stays below a specific limit. This either takes an absolute number of tasks or a percentage value that is taken relative to the configured maximum number of tasks on the system. If assigned the special value "infinity", no tasks limit is applied. This controls the "pids.max" control group attribute. For details about this control group attribute, see Process Number Controller.
The system default for this setting may be controlled with DefaultTasksMax= in systemd-system.conf(5).
DefaultTasksMax=
Configure the default value for the per-unit TasksMax= setting. See systemd.resource-control(5) for details. This setting applies to all unit types that support resource control settings, with the exception of slice units. Defaults to 15%, which equals 4915 with the kernel's defaults on the host, but might be smaller in OS containers.
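As a minimal sketch of both approaches (the value 4096 and the unit name myapp.service are placeholders, not anything taken from your system):
# /etc/systemd/system.conf
[Manager]
DefaultTasksMax=4096
$ sudo systemctl daemon-reexec   # re-execute the manager so system.conf is re-read
$ sudo systemctl set-property myapp.service TasksMax=4096   # per-unit override at runtime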
Is it possible to SET the interface statistics in Linux after it's been brought up? I'm dealing with rrdtool (mrtg) that gets upset by a daily ifdown and ifup, which brings the interface counters back to zero. Ideally I would like to continue counting from where I left off, and setting the interface values to what they were before the interface went down seems to be the easiest path.
I checked writing to /sys/class/net/ax0/statistics/rx_packets but that gives a Permission Denied error.
netstat, ifup, ifconfig and friends don't seem to support changing these values either.
Anything else I can try?
You can't set the kernel counters, no - but do you really need to?
MRTG will usually graph a rate, based on the difference between samples. So your MRTG/RRD will store packets-per-second values every cycle (usually 5 min, but maybe 1 min). When your device resets the counters, MRTG will see the value apparently go backwards, which will be discounted as out of range, so you get one failed sample. But the next sample will work, and a new rate will be given.
If you're getting a big spike in the MRTG graph at the point of the reset, this will be due to an incorrect 'counter rollover' detection. You can prevent this by either setting the MRTG AbsMax setting (to prevent this high value from being valid) or (better) by using SNMPv2 counters (where a reset is more obvious).
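For instance, a hedged sketch of the AbsMax approach in mrtg.cfg (the router, community, and numbers below are illustrative, not from your setup; only the target name ax0 comes from your question):
# mrtg.cfg excerpt -- bound the accepted values for one target
Target[ax0]: 2:public@router    # interface 2 on 'router', community 'public' (placeholders)
MaxBytes[ax0]: 12500000         # nominal maximum, here 100 Mbit/s expressed in bytes/s
AbsMax[ax0]: 13000000           # samples above this are rejected instead of graphed as a spike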
If you set your RRD file to have a large enough heartbeat and XFF, then this one missing sample will be interpolated, and so your graphs (which, remember, show the rate rather than the total) will continue to look fine.
Should you need the total, it can be derived by sum(rate x interval) which is automatically done by the Routers2 frontend for MRTG/RRD.
I am trying to edit my torrc and make all of the nodes funnel through one country.
So far I am able to force the entry and exit nodes but don't know how to change the middle node... any ideas?
I have already tried "MiddleNodes" and "RelayNodes"
EntryNodes {us},{ca}
ExitNodes {us},{ca}
StrictNodes 1
It's possible to restrict the middle nodes with the MiddleNodes option, per the Tor docs: https://2019.www.torproject.org/docs/tor-manual.html.en
MiddleNodes node,node,…
A list of identity fingerprints and country codes of nodes to use for "middle" hops in your normal circuits. Normal circuits include all circuits except for direct connections to directory servers. Middle hops are all hops other than exit and entry. This is an experimental feature that is meant to be used by researchers and developers to test new features in the Tor network safely. Using it without care will strongly influence your anonymity. This feature might get removed in the future. The HSLayer2Node and HSLayer3Node options override this option for onion service circuits, if they are set. The vanguards addon will read this option, and if set, it will set HSLayer2Nodes and HSLayer3Nodes to nodes from this set. The ExcludeNodes option overrides this option: any node listed in both MiddleNodes and ExcludeNodes is treated as excluded. See the ExcludeNodes option for more information on how to specify nodes.
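Building on the configuration in the question, a torrc restricting all three positions to US/CA relays could look like this (an untested sketch; keep the documentation's anonymity warning above in mind):
EntryNodes {us},{ca}
MiddleNodes {us},{ca}
ExitNodes {us},{ca}
StrictNodes 1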
Edit: See the new answer by user1652110 describing the MiddleNodes option, which was added in January 2019.
There is no option to do so. The closest option you can try is ExcludeNodes, using as large a list of country codes as you can come up with that doesn't include the countries you do want to use.
Also note, at the time of writing, limiting your circuits' entry and exit points to relays in the US and Canada might severely limit your performance, anonymity, and reliability since there just aren't that many high-bandwidth exits and guards in these two countries.
Trying to understand more about Native-Transport-Requests!
As we know, these are CQL requests, and if the limit is exceeded the result will be all-time-blocked NTRs.
My question is: how do I monitor these requests in real time and get some kind of report on them?
I see some settings like max_queued_native_transport_requests and native_transport_max_threads. How will these settings affect the all-time-blocked count?
Have a look at JIRA-11363.
Also check this discussion for more info.
The recommendation is to start with the default values and tune from there. The default values are:
max_queued_native_transport_requests=1024
native_transport_max_threads: 128
Monitor your nodes, and if you see an increasing number of blocked Native-Transport-Requests, then you need to increase max_queued_native_transport_requests.
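As a quick sketch of real-time monitoring (assuming shell access to the node), the Native-Transport-Requests thread pool, including its "All time blocked" counter, shows up in nodetool tpstats:
$ watch -n 5 'nodetool tpstats | grep -E "Pool Name|Native-Transport-Requests"'
If the blocked count keeps climbing between samples, that is the signal to raise max_queued_native_transport_requests.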
Also, I think it's worth checking these discussions: 1, 2
I am running Varnish v3.0.4 on a Debian server. I have had Varnish running on this server for a long while now and I am not having any problems with the installation except:
when I run varnishstat, my hit ratio is 0, and when I run varnishstat -1 it shows 0 client connections accepted.
There are values in other miscellaneous items such as backend_busy and backend_reuse.
The varnishtop utility shows activity as expected.
I am quite certain varnish is serving the data and even getting cache hits through the use of tools like http://www.isvarnishworking.com/
The site name is http://events.floydecovillage.com if you'd like to see for yourself.
I can add that I upgraded varnish from 3.0.2-3 to 3.0.4-1 in August of last year.
EDIT: I can also add that the server uptime displayed in the upper left hand corner of varnishstat is stuck on: 0+00:00:32
Is it possible that your hostname changed since Varnish was started? To support running multiple instances on a single host, Varnish allows you to give each instance a name, which determines where it keeps its temporary files and other state. One of these files is the shared memory log (a file named _.vsm), from which utilities such as varnishstat get information about the running Varnish instance.
If no -n whatever option is specified (either on the varnishd or varnishstat command line), it defaults to the current hostname of the machine. Check the /var/lib/varnish directory to find what names might have been used (each name will correspond to a subdirectory.) You can then run varnishstat -n whatever to view statistics of any specific instance.
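As a short sketch (the instance name you find will differ; <name> below is a placeholder):
$ ls /var/lib/varnish/      # each subdirectory corresponds to an instance name
$ varnishstat -n <name>     # view statistics for that specific instance
Passing the same -n explicitly to both varnishd and varnishstat avoids the hostname dependency altogether.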
Everyone knows that MRTG needs at least one value to be passed on its input.
In its per-target options, MRTG has 'gauge', 'absolute', and a default (no options) behaviour for what to do with the incoming data - or, how to count it.
Let's look at an elementary yet popular example:
We pass cumulative data from the network interface statistics of 'how many packets were received by the interface'.
We take it from '/proc/net/dev' or look at the 'ifconfig' output for a certain network interface. The number of received bytes increases every time; it's cumulative.
So as I can imagine, there are two possible types of statistics:
1. How fast the value changes over the time interval. In other words - activity.
2. A simple, as-is growing graph that just draws every new value each minute (or any other time interval).
The first graph will be jumpy (activity). The second will just keep growing.
I have read rrdtool's and MRTG's docs twice and can't understand which of the options mentioned above counts what.
I suppose (I am not sure) that 'gauge' draws values as-is, without any differentiation calculations (good for measuring how much memory or CPU is used every 5 minutes), while the default or 'absolute' behaviour tries to calculate the speed between neighbouring measurements - but what's the difference between the last two?
Can you explain in a simple manner which behaviour corresponds to which of the three possible options?
Thanks in advance.
MRTG assumes that everything is being measured as a rate (even if it isn't a rate).
Type 'gauge' assumes that you have already calculated the rate; thus, the provided value is stored as-is (after Data Normalisation). This is appropriate for things like CPU usage.
Type 'absolute' assumes the value passed is the count since the last update. Thus, the value is divided by the number of seconds since the last update to get a rate in thingies per second. This is rarely used, and only for certain unusual data sources that reset their value on being read - e.g., a script that counts the number of lines in a log file, then truncates the log file.
Type 'counter' (the default) assumes the value passed is a constantly growing count, possibly that wraps around at 16 or 64 bits. The difference between the value and its previous value is divided by the number of seconds since the last update to get a rate in thingies per second. If it sees the value decrease, it will assume a counter wraparound at 16 or 64 bit. This is appropriate for something like network traffic counters, which is why it is the default behaviour (MRTG was originally written for network traffic graphs)
Type 'derive' is like 'counter', but will allow the counter to decrease (resulting in a negative rate). This is not possible directly in MRTG but you can manually create the necessary RRD if you want.
All types subsequently perform Data Normalisation to adjust the timestamp to a multiple of the Interval. This will be more noticeable for Gauge types where the value is small than for counter types where the value is large.
For information on this, see Alex van der Bogaerdt's excellent tutorial.
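To tie the three types back to the MRTG configuration, a minimal sketch (the target names, router, and community below are made up for illustration):
Options[cpu]: gauge             # value is already a level/rate, stored as-is (CPU, memory, temperature)
Options[loglines]: absolute     # value is the count since the last poll; MRTG divides by elapsed seconds
Target[eth0]: 2:public@router   # no gauge/absolute option set => default counter handling (delta / seconds)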