What are the processes that write to "/var/adm/messages"?
From what I gathered, syslogd does the job. Am I right?
Also, I saw multiple files: messages, messages.0, messages.1, and so on. Why is that?
Is there any other system process that writes to these files?
Any help is highly appreciated.
Yes. Processes that use the syslog framework send messages to syslogd, which reads /etc/syslog.conf to determine where (or whether) each message should be written, based on the message's facility and level. For example, if syslog.conf has the entry
user.debug /var/log/mylog
then all messages at level debug or above (debug being the lowest level) from processes using the user facility (i.e. non-system processes) will be sent to /var/log/mylog (man syslog.conf for a full explanation, including the possible facilities and levels).
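For instance, here is a minimal sketch in Python of a process emitting a message through the syslog framework (the ident "mydaemon" and the message text are made up):

#!/usr/bin/env python3
# Minimal sketch: log at user.debug through the syslog framework.
# With the syslog.conf line above, syslogd would route this message
# to /var/log/mylog. The ident "mydaemon" is a placeholder.
import syslog

syslog.openlog(ident="mydaemon", facility=syslog.LOG_USER)
syslog.syslog(syslog.LOG_DEBUG, "hello from mydaemon")
syslog.closelog()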
The /var/adm/messages.X files are created as /var/adm/messages is rotated by the logadm cron job (again, see the man pages for logadm and logadm.conf).
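For reference, on a stock Solaris box the logadm.conf entry driving this rotation looks roughly like the following (check your own system, as the exact options may differ):
/var/adm/messages -C 4 -a 'kill -HUP `cat /var/run/syslog.pid`'
i.e. keep four rotated copies and send syslogd a HUP after each rotation so it reopens the log file.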
NOTE: This answer is based on Solaris experience; file locations and behavior may vary with other *NIX flavors.
You can find out yourself with dtrace:
http://dtracebook.com/index.php/File_Systems
Syscall write(2) by file name (run as root; @ is DTrace's aggregation syntax, counting writes per path):
dtrace -n 'syscall::write:entry { @[fds[arg0].fi_pathname] = count(); }'
I have a data API, implemented in Rust as an independent service process, that produces a stream of data, and I plan to write several client processes to read that data, because the clients have functionality built on the Apache Arrow data types.
I think this is a single-producer, multi-consumer problem. What is the best practice for exchanging this Apache Arrow data between processes with low latency?
The easiest way to do this is to send the data across a socket, preferably with something like Flight. That would be my recommendation until you prove that its performance is insufficient.
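As a starting point, here is a minimal sketch of that approach using pyarrow.flight (the port, the empty ticket, and the sample table are all placeholders, not part of any standard setup):

#!/usr/bin/env python3
# Minimal single-producer Flight sketch: one server holds a table,
# any number of consumer processes fetch it with do_get.
import threading
import time

import pyarrow as pa
import pyarrow.flight as flight

class Producer(flight.FlightServerBase):
    def __init__(self, location):
        super().__init__(location)
        self._table = pa.table({"x": [1, 2, 3]})  # placeholder data

    def do_get(self, context, ticket):
        # Stream the record batches back to the consumer.
        return flight.RecordBatchStream(self._table)

server = Producer("grpc://127.0.0.1:8815")
threading.Thread(target=server.serve, daemon=True).start()
time.sleep(0.2)  # crude wait for the server to come up

# A consumer process would then do something like this:
client = flight.connect("grpc://127.0.0.1:8815")
reader = client.do_get(flight.Ticket(b""))  # ticket payload unused here
print(reader.read_all())

server.shutdown()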
Since these are on the same machine, you can potentially use something like memory-mapped files. You first create a memory-mapped file (you will need to know the size in advance; I'm not sure exactly how to pick it, but you can easily find the size of the buffers and make a conservative guess for how much you'll need for metadata), and then the producer writes to that file. Make sure to write the data in the Arrow IPC format with no compression. This involves one memory copy, from user space to kernel space. You would then send the filename to the consumers over a socket.
Then, on the consumers, you can open the file as a memory-mapped file and work with the data (presumably in a read-only fashion, since there are multiple consumers). This read will be zero-copy, as the file should still be in the kernel's page cache. If the kernel is under memory pressure and swapping, however, this approach is likely to backfire.
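Here is a rough sketch of that hand-off with pyarrow (the file path under /dev/shm and the sample batch are assumptions for illustration):

#!/usr/bin/env python3
# Producer writes uncompressed Arrow IPC data to a file; consumers
# memory-map the same file and read it zero-copy.
import pyarrow as pa
import pyarrow.ipc as ipc

PATH = "/dev/shm/batches.arrow"  # assumed shared location on one machine

# --- producer side ---
arr = pa.array([1, 2, 3], type=pa.int64())
batch = pa.RecordBatch.from_arrays([arr], names=["x"])
with pa.OSFile(PATH, "wb") as sink:
    with ipc.new_file(sink, batch.schema) as writer:
        writer.write_batch(batch)
# ...then send PATH to the consumers over a socket (not shown)...

# --- consumer side ---
with pa.memory_map(PATH, "r") as source:
    table = ipc.open_file(source).read_all()  # buffers point into the mapping
print(table)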
I want to view the status of one process among 2000 processes, without using the top and ps commands.
The name of the process is tom.
A process doesn't have a name (only the program it is running has one, but see also pthread_setname_np(3); you might have pathological cases). It has a pid (some integer number, like 1234, of type pid_t). See credentials(7), fork(2), and execve(2). Use pidof(1) and pgrep(1) to find the pid of some process. An executable program (e.g. /bin/bash) can be run by several processes (or none, or only one).
You may use kill(2) with a zero signal number to check that the process exists.
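For example, a minimal sketch in Python (the pid 1234 is a placeholder):

import os

def pid_exists(pid: int) -> bool:
    # Signal 0 performs only the existence and permission checks;
    # no signal is actually delivered.
    try:
        os.kill(pid, 0)
    except ProcessLookupError:
        return False   # no such process
    except PermissionError:
        return True    # exists, but owned by another user
    return True

print(pid_exists(1234))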
Most importantly, you should consider using /proc/ (see proc(5) for more). For the process with pid 1234, see /proc/1234/, which has several files and subdirectories (notably /proc/1234/status and /proc/1234/maps). Try cat /proc/$$/status and cat /proc/$$/maps and stat /proc/$$/exe and ls -l /proc/$$/ in a terminal (then replace $$ with whatever pid interests you).
The top and ps utilities (and also pidof, pgrep, ...) use that /proc/ (which is the means by which the Linux kernel exposes information about processes, and about the system itself). You can write your own program (or script) that does the same thing using /proc/. See also this.
From inside a program, you can explore /proc/ as you would explore any other file tree, e.g. using stat(2), opendir(3), readdir(3), nftw(3), etc.
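Putting this together, here is a minimal, Linux-specific sketch that finds every process whose command name is tom by scanning /proc directly, without top or ps:

#!/usr/bin/env python3
# Scan /proc for processes named "tom" and print their state.
import os

TARGET = "tom"

for entry in os.listdir("/proc"):
    if not entry.isdigit():  # only the numeric entries are pids
        continue
    try:
        with open(f"/proc/{entry}/status") as f:
            fields = dict(line.split(":\t", 1) for line in f if ":\t" in line)
    except (FileNotFoundError, PermissionError):
        continue             # the process exited, or is not ours to read
    if fields.get("Name", "").strip() == TARGET:
        print(f"pid {entry}: {fields.get('State', '?').strip()}")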
I'm given a project whose only objective is to monitor a network's NFS performance. I know there are a bunch of open-source tools out there, but I would still like to understand the basic idea behind them in order to tweak them better. The network consists of a few hundred Linux systems and a few thousand accounts with NFS-mounted home directories; the script can be pushed out to every station, and running it on the server is also possible, if that does any good. As far as I know, essentially all the script should do is a few dd runs while watching the I/O rate over NFS. My question is just: what is the proper way of doing this? Do I add a new account to the system solely to run the scripts? Some general thoughts are greatly appreciated :)
Bonnie
A classic performance-evaluation tool. The main program tests database-type access to a single file (or a set of files, if you wish to test more than 1 GB of storage), and it tests the creation, reading, and deletion of small files, which can simulate the usage of programs such as Squid, INN, or Maildir-format email.
Relevance to NFS: performance testing, workload.
DBench
Dbench was written to allow independent developers to debug and test Samba. It is heavily inspired by the original Samba tool, NetBench.
Like NetBench, it lets you:
torture the file system
stress the network load independently of the disk I/O
measure performance
But it does not need as much hardware as NetBench to run.
Relevance to NFS:
IOZone
A performance test suite, POSIX-compliant and 64-bit capable. This is the filesystem test from the L.S.E. Main features:
POSIX async I/O, Mmap() file I/O, Normal file I/O
Single stream measurement, Multiple stream measurement, Distributed file server measurements (Cluster)
POSIX pthreads, Multi-process measurement
Selectable measurements with fsync, O_SYNC
Latency plots
Relevance to NFS: performance testing. Good for exercising a given mount point under various load conditions.
Full details can be found here: http://wiki.linux-nfs.org/wiki/index.php/Testing_tools
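If you still want a small probe of your own along the dd lines mentioned in the question, here is a rough sketch (the mount path and transfer size are placeholders) that times a streaming write and read on an NFS-mounted directory:

#!/usr/bin/env python3
# Crude NFS throughput probe: time a streaming write and read of a
# scratch file on an NFS mount. Path and size are placeholders.
import os
import time

PATH = "/home/testuser/nfs_probe.tmp"  # assumed to live on the NFS mount
SIZE_MB = 256
CHUNK = b"\0" * (1024 * 1024)          # 1 MiB of zeros, like dd if=/dev/zero

start = time.monotonic()
with open(PATH, "wb") as f:
    for _ in range(SIZE_MB):
        f.write(CHUNK)
    f.flush()
    os.fsync(f.fileno())               # force the data out to the server
print(f"write: {SIZE_MB / (time.monotonic() - start):.1f} MiB/s")

start = time.monotonic()
with open(PATH, "rb") as f:
    while f.read(1024 * 1024):
        pass
print(f"read:  {SIZE_MB / (time.monotonic() - start):.1f} MiB/s (may be client-cached)")

os.remove(PATH)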
In the /proc file system, why is it that some processes have an empty maps file? Is it because no memory has been allocated to them, or is the data just not available?
They must have some memory allocated; otherwise, how are they running?
First, notice that the pseudo-files /proc/1234/maps and /proc/self/maps always have a zero size, as reported by the stat(2) syscall and the ls command. However, they are sequentially readable (e.g. by the cat command, or with the read(2) syscall, e.g. as called by fgets). Try cat /proc/self/maps and ls -ls /proc/self/maps, for example.
A probable reason for the /proc/*/maps files to have a size of 0 is that computing their size would mean computing their content, and that could be expensive. So the kernel prefers to report 0 as their size. Think of them as a sort of pipe: you need to read them sequentially; they are not lseek(2)-able.
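You can see both behaviours from a few lines of Python:

import os

path = "/proc/self/maps"
print(os.stat(path).st_size)  # reports 0, just like ls -ls does

with open(path) as f:
    data = f.read()           # yet a sequential read returns real content
print(len(data), data.splitlines()[0])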
Read the proc(5) man page for details about /proc/; notice that it uses Unix permissions and ownership, so you cannot access the /proc/1234 directory if the process with pid 1234 is not yours.
And you might also have some zombie processes. These no longer have any address space, so I wouldn't be surprised if their maps pseudo-file in /proc were truly empty (in the sense that reading it immediately gives an end-of-file condition), or even missing.
Remember that files under /proc are pseudo-files, in the sense that the kernel is providing them (and giving their data), and they don't involve any real disk I/O. In particular, reading them should be fast.
I have read about limiting the size of a directory - creating big files, formatting, mounting, etc. But this is all very complicated. Does a utility (or something else) exist to set a limit on an already existing directory?
Quotas are based on filesystems, but you can always create a virtual filesystem and mount it on a specific (empty) directory with the usrquota and/or grpquota flags.
In steps, this is:
create the mount point
create a file of zeros, as large as the maximum size you want to reserve for the virtual filesystem, e.g.
Code:
dd if=/dev/zero of=/path/to/the/formatted/disk/space bs=1M count=1024
format this file with an ext3 filesystem (you can format a plain file even though it is not a block device, but double-check the syntax of every - dangerous - formatting command), e.g.
Code:
mkfs.ext3 /path/to/the/formatted/disk/space
mount the newly formatted disk space on the directory you've created as the mount point, e.g.
Code:
mount -o loop,rw,usrquota,grpquota /path/to/the/formatted/disk/space /path/of/mount/point
set proper permissions
set quotas
and the trick is done.
Tutorial here.
Original answer here
You could set a quota on the filesystem, but quotas are not directory-specific; they are filesystem- and user-specific.
You might also consider developing your own user-space file system using FUSE, but this will take some time.