Disk full from nginx logs: how to clear them? - linux

I need your help; my disk is currently full.
I just checked over SSH:
cd /var/log/nginx/
then ran ls (it gave me these results):
access.log domain.acc.log
error.log domain.err.log
Then ls -lh (it showed this result):
-rw-r--r-- 1 root root 0 access.log
-rw-r--r-- 1 root root 3.6K error.log
-rw-r--r-- 1 root root 27G domain.acc.log
-rw-r--r-- 1 root root 7.5M domain.err.log
That is where I realized that domain.acc.log is 27 GB.
I would like to clear it. Could someone tell me how to do that without making a mistake?
I use Linux.

Welcome to Stack Overflow!
This question likely belongs to another community, such as Server Fault, and may be migrated.
However, you can empty domain.acc.log while nginx is still running (assuming you don't need to keep the data) by running the command echo "" > /var/log/nginx/domain.acc.log.
To break down what this does: echo "" outputs an empty string (plus a trailing newline), and > redirects that output to the file, overwriting whatever content was there.
This is the safest way to empty the log: other processes can keep writing to the file without having to release their file descriptors, and you can be sure the underlying data is removed even if the path (/var/log/nginx/domain.acc.log) is just a link to the file.
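A minimal sketch of the whole operation, including a check that the space actually comes back (truncate -s 0 from GNU coreutils would do the same without even the trailing newline):
echo "" > /var/log/nginx/domain.acc.log   # empties the file in place (leaves a single newline)
ls -lh /var/log/nginx/domain.acc.log      # size should now be back to (almost) zero
df -h /                                   # confirm the free space actually came back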

Related

Is there a way to identify the user who owns a process from /proc/PID

I am parsing process details out of /proc/PID and I am so far unable to find who owns a process from that meta directory's files.
Documentation does not seem to point to that info either.
The owner of the process is the owner of all files in the /proc/PID directory.
$ ls -l /proc/27595
total 0
dr-xr-xr-x 2 me users 0 Jul 14 11:53 attr
-r-------- 1 me users 0 Jul 14 11:53 auxv
...
Also the file /proc/PID/loginuid holds the UID of the owner of the process.
$ cat /proc/27595/loginuid
1000
The owner of the files in /proc/[pid]/ is not always the user who started the process -- a program can, for example, make itself "non-dumpable" to avoid leaking sensitive information when it switches to another user, in which case the ownership of the files in that directory changes to root.
But normally the UID of the process can be retrieved by an fstat call (or a stat command) on the /proc/[pid] directory itself.
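For example, a quick sketch using GNU coreutils stat and the PID from the question (the output line is illustrative):
$ stat -c '%u %U' /proc/27595    # numeric UID and user name of the directory owner
1000 me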

Can cygwin ls show ACLs without providing the DOS path to the file?

The commands
cd c:/p4
ls -ld . c:/p4 /cygdrive/c/p4
show
d---------+ 1 jgunter Domain Users 0 Apr 27 18:41 .
d---------+ 1 jgunter Domain Users ? 0 Apr 27 18:41 /cygdrive/c/p4
drwxr-xr-x 1 jgunter Domain Users ? 0 Apr 27 18:41 c:/p4
ls shows the perms I want to see only for files specified with a C:/ path.
I know about getfacl, but I'm hoping there's some ls option that will show me what I want without requiring I spell out absolute paths.
I can do something like:
ls -ld `cygpath -da $#`
but when I'm in a deeply nested folder, the output is cluttered by full pathnames.
A DOS path makes Cygwin treat the file system as if it had no ACLs. So ls is showing the "correct" permissions in both cases; the same directory is simply handled as if it were mounted with different options depending on how you spell the path. Therefore ls doesn't have such an option, and you need a workaround.
https://cygwin.com/cygwin-ug-net/ov-new1.7.html states at 1.7.2:
Handle native DOS paths always as if mounted with "posix=0,noacl"
Besides this, I think that d---------+ is strange. I've tried it on my PC with Cygwin 1.7.31, and it shows drwx------+, which is a bit better. I have run into other bugs and strange behaviour in Cygwin's ACL handling; I guess there is some confusion and hackery around this. chmod 777 was a good workaround in my case.
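One possible workaround, sketched only as an idea (the lsd function name is made up, it reuses cygpath -da from the question, and the awk field rewrite will mangle file names containing spaces): run ls on the DOS path so Cygwin skips the ACLs, then print the name as you typed it instead of the absolute path.
lsd() {
    # show the "noacl" view of the permissions, but print the argument as typed
    for f in "$@"; do
        ls -ld "$(cygpath -da "$f")" | awk -v name="$f" '{ $NF = name; print }'
    done
}
Usage: lsd . c:/p4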

du -skh * in / returns vastly different size from df on centos 5.5

I have a VPS slice running CentOS 5.5. I am supposed to have 15 gigs of disk space, but according to df my disk usage seems to be doubled.
When I run du -skh * in / as root I get:
[root@yardvps1 /]# du -skh *
0 aquota.group
0 aquota.user
5.2M bin
4.0K boot
4.0K dev
4.9M etc
2.5G home
12M lib
14M lib64
4.0K media
4.0K mnt
299M opt
0 proc
692K root
23M sbin
4.0K selinux
4.0K srv
0 sys
48K tmp
2.0G usr
121M var
This is consistent with what I have uploaded to the machine, and adds up to about 5 gigs.
But when I run df I get:
[root@yardvps1 /]# df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/simfs 15728640 11659048 4069592 75% /
none 262144 4 262140 1% /dev
It is showing me as using almost 12 gigs already.
What is causing this discrepancy, and is there anything I can do about it? I planned the server out based on 15 gigs, but now it is basically only letting me have about 7 gigs of stuff on it.
Thanks.
The most common cause of this effect is open files that have been deleted.
The kernel will only free the disk blocks of a deleted file if it is not in use at the time of its deletion. Otherwise that is deferred until the file is closed, or the system is rebooted.
A common Unix-world trick to ensure that no temporary files are left around is the following:
A process creates and opens a temporary file
While still holding the open file descriptor, the process unlinks (i.e. deletes) the file
The process reads and writes to the file normally using the file descriptor
The process closes the file descriptor when it's done, and the kernel frees the space
If the process (or the system) terminates unexpectedly, the temporary file is already deleted and no clean-up is necessary.
As a bonus, deleting the file reduces the chances of naming collisions when creating temporary files and it also provides an additional layer of obscurity over the running processes - for anyone but the root user, that is.
This behaviour ensures that processes don't have to deal with files that are suddenly pulled from under their feet, and also that processes don't have to consult each other in order to delete a file. It is unexpected behaviour for those coming from Windows systems, though, since there you are not normally allowed to delete a file that is in use.
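A minimal shell illustration of that trick (the file name and descriptor number are arbitrary):
tmp=$(mktemp)             # create a temporary file
exec 3<>"$tmp"            # keep it open on file descriptor 3
rm "$tmp"                 # unlink it; du and ls no longer see it
echo "scratch data" >&3   # writes still land on disk through fd 3
df .                      # the blocks stay allocated, so df still counts them
exec 3>&-                 # closing fd 3 is what finally frees the space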
The lsof command, when run as root, will show all open files and will specifically flag those that have been deleted:
# lsof 2>/dev/null | grep deleted
bootlogd 2024 root 1w REG 9,3 58 917506 /tmp/init.0W2ARi (deleted)
bootlogd 2024 root 2w REG 9,3 58 917506 /tmp/init.0W2ARi (deleted)
Stopping and restarting the guilty processes, or just rebooting the server should solve this issue.
Deleted files could also be held open by the kernel if, for example, it's a mounted filesystem image. In this case unmounting the filesystem or rebooting the server should do the trick.
In your case, judging by the size of the "missing" space I'd look for any references to the file that you used to set up the VPS e.g. the Centos DVD image that you deleted after installing.
Another case which I've come across although it doesn't appear to be your issue is if you mount a partition "on top" of existing files.
If you do so you effectively hide existing files that exist in the directory on the mounted-to partition (the mount point) from the mounted partition.
To fix: stop any processes with open files on the mounted partition, unmount partition, find and move/remove any files that now appear in mount point directory.
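If you just want to check for this case without taking the mount down first, one possible sketch is to bind-mount / somewhere else and look underneath the mount point there (the paths below are placeholders):
mkdir -p /mnt/root-bind
mount --bind / /mnt/root-bind    # a plain bind mount exposes the underlying directory tree
du -sh /mnt/root-bind/home       # replace /home with the suspect mount point
umount /mnt/root-bind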
I had the same trouble with a FreeBSD server. A reboot helped.

How can I tar a file that is being used by another process?

I'm archiving a directory. This directory has a file that is being written by another process. When I tar this using Linux tar or the Perl Tar module, the entry for the file is there in the archive, but its contents are null.
Before tarring the files are...
-rw-r--r-- 1 irraju dba 28 Feb 18 02:22 a
-rw-r--r-- 1 irraju dba 25 Feb 18 02:23 b
-rw-r--r-- 1 irraju dba 29 Feb 18 03:38 c
After untarring
-rw-r--r-- irraju/dba 28 2009-02-18 02:22:58 a
-rw-r--r-- irraju/dba 25 2009-02-18 02:23:17 b
-rw-r--r-- irraju/dba 0 2009-02-18 03:33:12 c
How can I fix this problem? I want the file to be in the archive with the contents it has at the instant it is archived. This file can be a log file and assume that we can not close the file handle before tarring.
As you tagged the question with "Linux" there's a chance you're using an LVM partition.
If indeed you're running on an LVM partition, you can use the LVM snapshot feature.
Here's a link to the relevant LVM documentation on how to perform the operation.
Here's a part of the LVM snapshot intro:
A wonderful facility provided by LVM is 'snapshots'. This allows the administrator to create a new block device which presents an exact copy of a logical volume, frozen at some point in time. Typically this would be used when some batch processing, a backup for instance, needs to be performed on the logical volume, but you don't want to halt a live system that is changing the data. When the snapshot device has been finished with the system administrator can just remove the device. This facility does require that the snapshot be made at a time when the data on the logical volume is in a consistent state - the VFS-lock patch for LVM1 makes sure that some filesystems do this automatically when a snapshot is created, and many of the filesystems in the 2.6 kernel do this automatically when a snapshot is created without patching.
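A rough sketch of that workflow (the volume group and logical volume names, snapshot size, and mount point below are placeholders, not taken from the question):
# create a snapshot of the volume that holds the directory being archived
lvcreate --size 1G --snapshot --name data-snap /dev/vg0/data
mkdir -p /mnt/data-snap
mount -o ro /dev/vg0/data-snap /mnt/data-snap
# tar the frozen copy instead of the live, still-being-written files
tar -cf /tmp/backup.tar -C /mnt/data-snap .
umount /mnt/data-snap
lvremove -f /dev/vg0/data-snap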
Try copying the files first...
cp a a.tmp
cp b b.tmp
cp c c.tmp
...then tarball everything together...
tar cf abc.tar *.tmp
...and clean up:
rm *.tmp
If that doesn't work then the process holding the file handle doesn't want to share read access...
You may find that this depends on the filesystem used and the application that is accessing the file. The closest to a generic solution is to use a filesystem that supports snapshots and create a snapshot before running tar.
Your second output was made after your first, yet it shows an earlier timestamp for c; that can't be right. I'm guessing that tar is right here: when it was doing its job, the file was empty. You may be dealing with a race condition here.
As others have said, it depends on the file system & OS being used. sync first (or whatever the equivalent is on your file system), copy the files to a temp directory and then tar them up. If the file system won't allow you to copy an opened file, then you're SOL; Perl can't get around file system limitations.

core dump files on Linux: how to get info on opened files?

I have a core dump file from a process that has probably a file descriptor leak (it opens files and sockets but apparently sometimes forgets to close some of them). Is there a way to find out which files and sockets the process had opened before crashing? I can't easily reproduce the crash, so analyzing the core file seems to be the only way to get a hint on the bug.
If you have a core file and you have compiled the program with debugging options (-g), you can see where the core was dumped:
$ gcc -g -o something something.c
$ ./something
Segmentation fault (core dumped)
$ gdb something core
You can use this to do some post-mortem debugging. A few gdb commands: bt prints the stack, fr jumps to a given stack frame (see the output of bt).
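A scripted version of the same post-mortem session (binary and core file names taken from the example above):
$ gdb --batch -ex 'bt full' ./something core   # print the full backtrace non-interactively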
Now if you want to see which files are open at the time of a segmentation fault, just handle the SIGSEGV signal, and in the handler dump the contents of the /proc/PID/fd directory (e.g. with system("ls -l /proc/PID/fd") or execv).
With this information at hand you can easily find what caused the crash, which files were open, and whether the crash and the file descriptor leak are connected.
Your best bet is to install a signal handler for whatever signal is crashing your program (SIGSEGV, etc.).
Then, in the signal handler, inspect /proc/self/fd, and save the contents to a file. Here is a sample of what you might see:
Anderson cxc # ls -l /proc/8247/fd
total 0
lrwx------ 1 root root 64 Sep 12 06:05 0 -> /dev/pts/0
lrwx------ 1 root root 64 Sep 12 06:05 1 -> /dev/pts/0
lrwx------ 1 root root 64 Sep 12 06:05 10 -> anon_inode:[eventpoll]
lrwx------ 1 root root 64 Sep 12 06:05 11 -> socket:[124061]
lrwx------ 1 root root 64 Sep 12 06:05 12 -> socket:[124063]
lrwx------ 1 root root 64 Sep 12 06:05 13 -> socket:[124064]
lrwx------ 1 root root 64 Sep 12 06:05 14 -> /dev/driver0
lr-x------ 1 root root 64 Sep 12 06:05 16 -> /temp/app/whatever.tar.gz
lr-x------ 1 root root 64 Sep 12 06:05 17 -> /dev/urandom
Then you can return from your signal handler, and you should get a core dump as usual.
One way I get at this information is just running strings on the core file. For instance, when I ran file on a core recently, the argument list it reported was truncated because of the length of the folder names. I knew my run would have opened files from my home directory, so I just ran:
strings core.14930|grep jodie
But this is a case where I had a needle and a haystack.
If the program forgot to close those resources it might be because something like the following happened:
fd = open("/tmp/foo",O_CREAT);
//do stuff
fd = open("/tmp/bar",O_CREAT); //Oops, forgot to close(fd)
now I won't have the file descriptor for foo in memory.
If this didn't happen, you might be able to find the file descriptor number, but then again that is not very useful, because descriptors are continuously reused; by the time you get to debug, you won't know which file a given number actually referred to at the time.
I really think you should debug this live, with strace, lsof and friends.
If there is a way to do it from the core dump, I'm eager to know it too :-)
You can try using strace to see the open, socket and close calls the program makes.
Edit: I don't think you can get the information from the core; at most it will have the file descriptors somewhere, but this still doesn't give you the actual file/socket. (Assuming you can distinguish open from closed file descriptors, which I also doubt.)
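For instance, a sketch of attaching strace to the already-running process (the PID is a placeholder):
$ strace -f -e trace=open,openat,socket,close -p 12345 -o fd-trace.log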
Recently, while troubleshooting and analysing an error, a customer provided me with a core dump that had been generated on his filesystem and then went out of station. In order to quickly scan through the file and read its contents I used the command
strings core.67545 > coredump.txt
and was later able to open the resulting file in a text editor.
A core dump is a copy of the memory the process had access to when it crashed. Depending on how the leak is occurring, it might have lost the reference to the handles, so it may prove to be useless.
lsof lists all currently open files in the system; you could check its output to find leaked sockets or files. Yes, you'd need to have the process running. You could run it under a specific username to easily discern which open files belong to the process you are debugging.
I hope somebody else has better information :-)
Another way to find out what files a process has opened - again, only during runtime - is looking into /proc/PID/fd/, which contains symlinks to open files.

Resources