I am running Varnish 3.0.4 on a Debian server. Varnish has been running on this server for a long while now and I have not had any problems with the installation, except:
when I run varnishstat, my hit ratio is 0, and when I run varnishstat -1 it shows 0 client connections accepted.
There are values in other miscellaneous items such as backend_busy and backend_reuse.
The varnishtop utility shows activity as expected.
Using tools like http://www.isvarnishworking.com/, I am quite certain Varnish is serving the data and even getting cache hits.
The site name is http://events.floydecovillage.com if you'd like to see for yourself.
I can add that I upgraded varnish from 3.0.2-3 to 3.0.4-1 in August of last year.
EDIT: I can also add that the server uptime displayed in the upper left-hand corner of varnishstat is stuck at 0+00:00:32.
Is it possible that your hostname changed since Varnish was started? To support running multiple instances on a single host, Varnish allows you to give each instance a name, which determines where it keeps its temporary files and other state. One of these files is the shared memory log (a file named _.vsm), from which utilities such as varnishstat get information about the running Varnish instance.
If no -n <name> option is specified (either on the varnishd or the varnishstat command line), it defaults to the current hostname of the machine. Check the /var/lib/varnish directory to find what names might have been used (each name will correspond to a subdirectory). You can then run varnishstat -n <name> to view statistics for any specific instance.
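For example, a quick check along those lines (the instance name is just a placeholder for whatever subdirectory you find):
ls /var/lib/varnish            # each subdirectory is an instance name
varnishstat -n <name>          # view stats for that specific instance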
I am currently tracing "cannot fork()" errors on my Ubuntu server, and I was able to pinpoint them to the pids.max value of 700 under /sys/fs/cgroup/pids/.
However, I am only able to set the values /system.slice/pids.max and /user.slice/pids.max, not pids.max itself. Plus, these reset after a reboot to the value max, which again enforces the global pids.max value.
Is it possible to simply change it from 700 to something higher? root + sudo were of no help.
Is there another way to override this value?
After a long back-and-forth with my hosting provider, they finally spilled the beans.
I was renting a virtual server with very beefy hardware for very little money. The catch? You can only run 700 concurrent tasks. That is where the pids.max value came from, and it overruled the TasksMax and numprocs values.
I think you're looking for the DefaultTasksMax= directive in /etc/systemd/system.conf.
You can check the runtime value by issuing systemctl show -p DefaultTasksMax:
$ systemctl show -p DefaultTasksMax
DefaultTasksMax=19096
If you wish to change it, simply edit the respective directive line in /etc/systemd/system.conf. A similar directive (TasksMax=) exists to tweak this setting on a per-unit basis.
Relevant documentation[0][1] snippets:
TasksMax=N
Specify the maximum number of tasks that may be created in the unit. This ensures that the number of tasks accounted for the unit (see above) stays below a specific limit. This either takes an absolute number of tasks or a percentage value that is taken relative to the configured maximum number of tasks on the system. If assigned the special value "infinity", no tasks limit is applied. This controls the "pids.max" control group attribute. For details about this control group attribute, see Process Number Controller.
The system default for this setting may be controlled with DefaultTasksMax= in systemd-system.conf(5).
DefaultTasksMax=
Configure the default value for the per-unit TasksMax= setting. See systemd.resource-control(5) for details. This setting applies to all unit types that support resource control settings, with the exception of slice units. Defaults to 15%, which equals 4915 with the kernel's defaults on the host, but might be smaller in OS containers.
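As a minimal sketch of the per-unit variant, assuming a hypothetical unit named myapp.service, a drop-in at /etc/systemd/system/myapp.service.d/override.conf could look like this:
[Service]
TasksMax=4096
Run systemctl daemon-reload and restart the unit afterwards to apply it.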
For a few days now we have been encountering a problem with our ArangoDB installation. A few minutes to an hour after startup, all connections to the database are refused. The arango log file says that there are "Too many open files". "lsof | grep arango | wc -l" shows that the database has around 50,000 open file handles, which is well under the maximum allowed by the Linux system (around 3 million).
Has anyone an idea where this error comes from?
We are using Ubuntu Linux with a 3.13 kernel, 30 GB RAM and three cores. The database is still very small, with around 1.5 million entries and a size of 50 GB.
Thx, secana
EDIT:
"netstat -anpt | fgrep 2480" shows:
root#syssec-graphdb-001-test:~# netstat -anpt | fgrep 2480
tcp 0 0 10.215.17.193:2480 0.0.0.0:* LISTEN 7741/arangod
tcp 0 0 10.215.17.193:2480 10.215.50.30:53453 ESTABLISHED 7741/arangod
tcp 0 0 10.215.17.193:2480 10.215.50.31:49299 ESTABLISHED 7741/arangod
tcp 0 0 10.215.17.193:2480 10.215.50.30:53155 ESTABLISHED 7741/arangod
"ulimit -n" has a result of 1024, so I think that the ~50,000 are all arango processes together.
Last lines in log file before the database died:
2015-05-26T12:20:43Z [9672] ERROR cannot open datafile '/data/arangodb/databases/database-235999516/collection-28464454696/datafile-18806474509149.db': 'Too many open files'
2015-05-26T12:20:43Z [9672] ERROR cannot open datafile '/data/arangodb/databases/database-235999516/collection-28464454696/datafile-18806474509149.db': Too many open files
2015-05-26T12:20:43Z [9672] DEBUG [arangod/VocBase/collection.cpp:1632] cannot open '/data/arangodb/databases/database-235999516/collection-28464454696', check failed
2015-05-26T12:20:43Z [9672] ERROR cannot open document collection from path '/data/arangodb/databases/database-235999516/collection-28464454696'
It looks like it will make sense to increase the maximum number of open files a process is allowed to manage. Given the stated database size of around 50 GB, the (presumably default) value of 1024 seems too low.
arangod will require one file descriptor for each parallel client connection. That may not be many, but in the face of HTTP keep-alive connections this could already account for several file descriptors.
Additionally, each datafile of an active collection will need to be memory-mapped and cost one file descriptor as well. With the default datafile size of 32 MB, a database size of 50 GB (on disk) will already consume 1,600 file descriptors:
50 GB database size / 32 MB per datafile = 1,600 datafiles
Increasing the ulimit -n value for the arangod user and environment therefore makes sense. You can confirm that arangod can actually use the configured number of file descriptors by starting it with the option --server.descriptors-minimum <value>, e.g.
--server.descriptors-minimum 32768
for that many file descriptors. If arangod cannot actually use the specified number of file descriptors, it will fail at startup with a fatal error. Of course, that option can also be put into the arangod.conf file.
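A rough sketch of raising the limit persistently via PAM limits, assuming arangod runs as a user named arangodb (adjust to your setup), would be two lines in /etc/security/limits.conf:
arangodb  soft  nofile  65535
arangodb  hard  nofile  65535
Note that limits.conf applies to PAM login sessions, so a daemon started from an init script may instead need a ulimit -n line in its startup script or defaults file.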
Additionally, the default size for (new) datafiles can be increased via the journalSize parameter for collections. That won't help right now, but will lower the number of required file descriptors for data saved in the future.
For emergencies when you can't restart the database, as in my case, you will find this blog post very useful: it explains how you can change the ulimit of a running process.
If your distribution has util-linux 2.21 or newer, you can use the "prlimit" tool, or you can compile the small example C program in the blog post, which worked great for me.
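For example, a sketch using prlimit against a running process (the PID and the limit value are placeholders):
prlimit --pid <PID> --nofile=65535:65535   # set soft:hard open-file limits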
To check the actual limits of a process you can use:
cat /proc/<PID>/limits
Good luck!
How can I see the last SSH logins with the "last" command?
I mean the last 10 days.
It only shows me the last two days, even if I use last -n 1000.
Or maybe my logs only contain the last two days; if so, how can I check that and increase the retention?
You'll need to check /etc/logrotate.conf. Here's the relevant portion from one of my servers:
/var/log/wtmp {
    monthly
    create 0664 root utmp
    minsize 1M
    rotate 1
}
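If you want last to reach further back, one hedged tweak (assuming a monthly rotation like the above) is to keep more rotated copies, since rotate 1 keeps only a single old file:
/var/log/wtmp {
    monthly
    create 0664 root utmp
    minsize 1M
    rotate 6
}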
If your server is rotating files out and you want to look at what was in a previous month, then use the last -f command.
ls /var/log/wtmp*
last -f /var/log/wtmp-20140902 (or whatever the filename is to examine)
Log rotation and renaming are distribution-dependent (thanks, David C. Rankin).
Lastly (no pun intended), you can always do a
man last
and get all the potential command line switches.
The information about who logged in when is available in /var/log/auth.log (or in other log files on other distributions). There are multiple log monitoring programs that can extract whatever information you configure as relevant. On any sane system, every user authentication is logged.
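For instance, a quick (hedged) way to pull successful SSH logins out of that file on a Debian/Ubuntu-style system:
grep 'sshd.*Accepted' /var/log/auth.log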
If the accounting subsystem is up and running, then lastcomm shows information about finished processes.
I am new to Linux, and I had to set the DISPLAY variable to run a Java application. Somehow I managed to do that, and I understand that the display can be set using
<host>:<display>[.<screen>]
but what I am using is <host>:1001.
Now, does this 1001 mean the 1001st display on this Linux machine? Are that many displays possible on one machine, or is my understanding wrong?
The DISPLAY variable is used by X11 to identify your display (and keyboard and mouse). Usually it'll be :0 on a desktop PC, referring to the primary monitor, etc.
If you're using SSH with X forwarding (ssh -X otherhost), then it'll be set to something like localhost:10.0. This tells X applications to send their output to, and receive their input from, TCP port 127.0.0.1:6010, which SSH will forward back to your original host.
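You can see the display-to-port mapping for yourself; a small sketch (the port assumes the usual 6000 base):
echo $DISPLAY                 # e.g. localhost:10.0
netstat -ltn | grep 6010      # sshd listening on 127.0.0.1:6010 for display :10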
And, yes, back in the day, when "thin client" computing meant an X terminal, it was common to have several hundred displays connected to the same host.
The DISPLAY values are usually :0, :0.0, etc. when running under an X server on the same host. Large numbers like :1001 are typical for X connections forwarded over SSH. The display number is really added to 6000 to get the TCP port number; ports for local displays start at 6000, and SSH-forwarded ones could start at 7000. (This offset differs between systems; e.g. 10 or 100 are also possible.)
Since these values are assigned dynamically, you should take the value for DISPLAY from an existing connection's environment, provided that the proper authorization token is also available (e.g. in ~/.Xauthority).
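For example, from within an existing SSH session (a sketch):
echo $DISPLAY    # reuse this value in the Java application's environment
xauth list       # should contain a cookie for that display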
It's a remote server, so there is no GUI or other fancy stuff installed; I connect to the host using SSH.
For security reasons, I suppose, I cannot use the 'date -s' command to change the local server's current time.
$ cat /etc/issue ==> Ubuntu 10.04 LTS
$ uname -r ==> 2.6.32-042stab037.1
$ echo $SHELL ==> /bin/bash
'date' shows me a time that is about 10 minutes early. I tried linking the right timezone file from /usr/share/zoneinfo (New York in my case) to /etc/localtime, but nothing really changed; my clock is still 10 minutes out of sync.
Do I have to generate a new time zone binary using zic? If yes, how? If not, what else could I try?
Thanks a bunch in advance.
/s
Use adjtimex(8) to set the system clock of a running system. It will smoothly adjust the time and not leave any scary gaps as date would. The syntax for using it is quite beyond me, however, so I just use ntp and forget it.
If your clock issues persist across reboots, you will need to reset the hardware clock with hwclock as well.
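For example, once the system clock is correct, running this as root writes it to the hardware clock:
hwclock --systohc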
Did you think about installing an NTP daemon to keep your time synchronized and avoid this kind of drift?
ntpd is a good starting point (it is provided by the Debian packages ntp and openntpd).
You'll have to set a reference server (usually pool.ntp.org, but you can change it if you need to), then launch it at machine startup and let it do its job (i.e. keep the time synchronized).
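A minimal sketch for a Debian-style ntpd configuration in /etc/ntp.conf, using the public pool, would be:
server 0.pool.ntp.org iburst
server 1.pool.ntp.org iburst
server 2.pool.ntp.org iburst
Restart the daemon afterwards (on this Ubuntu release, likely service ntp restart).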