Append all logs to /var/log - linux

Application scenario:
I have the (normal/permanent) /var/log mounted on an encrypted partition (/dev/LVG/log). /dev/LVG/log is not accessible at boot time; it has to be manually activated later via su over SSH.
A RAM drive (using tmpfs) is mounted on /var/log at init time (in rc.local).
Once /dev/LVG/log is activated, I need a good way of appending everything in the tmpfs to /dev/LVG/log before mounting it as /var/log.
Any recommendations on what would be a good way of doing so? Thanks in advance!

The only thing you can do is block until you somehow verify that /var/log is mounted on the encrypted VG, or queue log entries until that happens if your app must start on boot, which could get kind of expensive. You can't be responsible for every other app on the system, and I can't see any reason to encrypt boot logs.
Then again, if you know the machine has memory to spare, a log queue that flushes once some event says it is OK to write to disk would seem sensible. That's no more expensive than the history most shells keep, as long as you take care to avoid floods of events that could fill up the queue.
This does not account for possible log loss, but it could with a little imagination.

There is a risk you could lose logging. You might want to try writing your logs to a file in /tmp, which is tmpfs and thus in memory. You could then append the contents to your encrypted volume and remove the file in /tmp. Of course, if your machine failed to boot and went down again, /tmp would be erased and you'd lose a good way of working out why.
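A minimal sketch of that hand-off, assuming the tmpfs is currently on /var/log, the encrypted volume is /dev/LVG/log, and /mnt/persistlog is a scratch mount point invented for this example:

    # Sketch only: /mnt/persistlog is a temporary mount point made up for this example.
    mkdir -p /mnt/persistlog
    mount /dev/LVG/log /mnt/persistlog

    # Append each top-level tmpfs log file onto its persistent counterpart.
    # (Subdirectories and rotated files would need extra handling, e.g. rsync.)
    for f in /var/log/*; do
        [ -f "$f" ] && cat "$f" >> "/mnt/persistlog/${f##*/}"
    done

    # Swap the mounts; stop syslog first so it reopens its files afterwards.
    service rsyslog stop        # or however your distro manages syslog
    umount /mnt/persistlog
    umount /var/log             # drops the tmpfs
    mount /dev/LVG/log /var/log
    service rsyslog start

Appending (rather than copying over) keeps the persistent history intact; whether that interleaving is acceptable depends on how you read the logs later.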

Related

Unable to increase disk size on file system

I'm currently trying to log in to one of the instances created on Google Cloud, but I find myself unable to do so. Somehow the machine escaped my attention and the hard disk got completely full. Of course I wanted to free some disk space and make sure the server could restart, but I am facing some issues.
First off, I have found the guide on increasing the size of the persistent disk (https://cloud.google.com/compute/docs/disks/add-persistent-disk). I followed that and already set it 50 GB which should be fine for now.
However, at the file system level, because my disk is full I cannot make any SSH connection. The error is simply a timeout, caused by the fact that there is absolutely no space for the SSH daemon to write to its log. Without any form of connection I cannot free disk space and/or run the "resize2fs" command.
Furthermore, I already tried different approaches.
I seem to not be able to change the boot disk to something else.
I created a snapshot and tried to increase the disk size on the new instance I created from that snapshot, but it has the same problem (the filesystem is stuck at 15 GB).
I am not allowed to mount the disk as an additional disk in another instance.
Currently I'm pretty much out of ideas. The important data on the disk was backed up, but I'd rather have the settings working as well. Does anyone have any clues as to where to start?
[EDIT]
Currently still trying out new things. I have also tried to run shutdown and startup scripts that remove /opt/* in order to free some temporary space, but the scripts either don't run or produce an error I cannot catch. It's pretty frustrating working nearly blind, I must say.
The next step for me would be to try and get the snapshot locally. It should be doable using the bucket but I will let you know.
[EDIT2]
Getting a snapshot locally is not an option either, or so it seems. Images of Google Cloud instances can only be created or deleted, not downloaded.
I'm now out of ideas.
So I finally found the answer. These steps were taken:
In the GUI I increased the size of the disk to 50 GB.
In the GUI I detached the drive by deleting the machine whilst ensuring that I did not throw away the original disk.
In the GUI I created a new machine with a sufficiently big hard disk.
On the command line (important!!) I attached the disk to the newly created machine (the GUI option still has a bug ...)
After that I could mount the disk as a secondary disk and perform all the operations I needed.
Keep in mind: by default, Google Cloud solutions do NOT use logical volume management, so pvresize/lvresize/etc. are not installed, and resize2fs might not work out of the box.
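For reference, a hedged sketch of that command-line attach plus the follow-up resize; the instance name, disk name and zone below are made up, and gcloud flags may vary between SDK versions:

    # Attach the original (now 50 GB) disk to a fresh rescue VM.
    gcloud compute instances attach-disk rescue-vm \
        --disk broken-disk --zone europe-west1-b

    # On the rescue VM: the disk usually shows up as /dev/sdb.
    sudo mount /dev/sdb1 /mnt
    sudo rm -rf /mnt/opt/*            # or whatever can safely be deleted
    sudo umount /mnt

    # Grow the filesystem to the new disk size (no LVM involved by default).
    # If the partition table itself is still 15 GB, grow the partition first
    # (e.g. with growpart or parted) before running resize2fs.
    sudo e2fsck -f /dev/sdb1
    sudo resize2fs /dev/sdb1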

Can NFS soft mounts cause silent corruption even after a successful "close" operation

Almost everything I've read says that NFS soft mounts can cause silent corruption. I assume this is because of the following scenario:
user application writes to NFS
NFS client accepts the write request and returns success to the user app
NFS client has data queued/buffered, waiting to be written to the NFS server
Some problem prevents the queue/buffered data from being written (eg. NFS server goes down)
My question is, what happens with this scenario with NFS soft mounts:
Same steps as above, but in addition...
The user app continues to write more data on the same file handle
The user app closes the file
Using soft mounts, will NFS flush its cache for the just-closed file? And if it is unable to do that (because the NFS soft mount gives up because of errors), shouldn't the user app get an error back from the close call?
I.e., I'm wondering whether a successful close on a soft-mounted NFS file guarantees that there was no silent corruption.
Later edit:
Looking at http://www.avidandrew.com/understanding-nfs-caching.html, it says,
In NFSv3, the close() will cause the client to flush all data to stable storage. The client will also flush data to stable storage on a chmod, since that could potentially affect its ability to write back the data. It will not bother to do so for rename. An application should normally be able to rely on the data being safely on disk in both these situations provided that the server honours the NFS protocol (with a caveat that an ill-timed 'kill -9' could interrupt the process of flushing).
But then it also says that an NFS "commit" is ignored if the NFS volume was mounted with the async option (the default, as far as I can tell), so maybe this only applies if the NFS volume is explicitly mounted with the sync option? But the NFS man page says the sync option doesn't do caching, which contradicts this. Oh well.
The Linux NFS FAQ states that
A8. What is close-to-open cache consistency?
[...]
When the application closes the file, the NFS client writes back any pending changes to the file so that the next opener can view the changes. This also gives the NFS client an opportunity to report any server write errors to the application via the return code from close(). This behavior is referred to as close-to-open cache consistency.
I (with no proof) do not expect that fclose() causes any explicit flushing, nor that it blocks while any flushing occurs. You've simply relinquished the file handle to the local kernel.
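One way to make the failure mode visible from the shell, under assumed mount options and file names (the export path, timeouts and file names below are only examples): force an fsync so any queued write error has to surface in the exit status instead of vanishing after close().

    # soft: give up after retrans attempts and return EIO to the application;
    # hard (the default) would retry indefinitely instead.
    mount -t nfs -o soft,timeo=100,retrans=3 server:/export /mnt/nfs

    # conv=fsync makes dd call fsync() before exiting, so a server-side
    # write failure shows up as a non-zero exit status rather than being
    # lost in the client's write-back cache.
    if ! dd if=bigfile of=/mnt/nfs/bigfile bs=1M conv=fsync; then
        echo "write to NFS failed" >&2
    fi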
https://serverfault.com/questions/9499/what-are-the-advantages-disadvantages-of-hard-versus-soft-mounts-in-unix

Forensic analysis - process log

I am performing Forensic analysis on Host based evidence - examining partitions of a hard drive of a server.
I am interested in finding the processes all the "users" ran before the system died/rebooted.
As this isn't live analysis I can't use ps or top to see the running processes.
So, I was wondering if there is a log like /var/log/messages that shows me what processes users ran.
I have gone through a lot of logs in /var/log/* - they give me information about logins, package updates, authorization - but nothing about the processes.
If no "command accounting" was enabled, there is no such log.
The chances of finding something are not great; anyway, a few things to consider (see the sketch after this list):
it depends on how graceful the death/reboot was (if processes were killed gracefully, .bash_history and similar files may have been updated with recent session info)
the utmp and wtmp files may give the list of users active at the time of the reboot
the OS may have saved a crash dump (this depends on the Linux distribution). If so, you may be able to examine the OS state at the moment of the crash. See Red Hat's crash for details (http://people.redhat.com/anderson/crash_whitepaper/).
/tmp and /var/tmp may hold some clues about what was running
any files with mtime and ctime timestamps (maybe atime as well) near the time of the crash
maybe you can get something useful from the swap partition (especially if the reboot was related to heavy RAM usage)
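A few hedged examples of how those artifacts could be examined offline, assuming the suspect filesystem is mounted read-only at /mnt/evidence and a raw copy of the swap partition sits in swap.img (both names invented for this sketch):

    # Shell history (only written out if the sessions ended cleanly).
    cat /mnt/evidence/root/.bash_history /mnt/evidence/home/*/.bash_history

    # Login/logout records from the copied wtmp file.
    last -f /mnt/evidence/var/log/wtmp

    # Files modified in a window around the crash (adjust the timestamps).
    find /mnt/evidence -newermt "2015-06-01 12:00" \
        ! -newermt "2015-06-01 14:00" -ls

    # Scrape the swap image for command-line fragments.
    strings swap.img | grep -E 'bash|sudo|ssh' | less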
So, I was wondering if there is a log like /var/log/messages that shows me what processes users ran
Given the OS implied by the file system path /var/log, I am assuming you are using Ubuntu or some other Linux-based server. If you were not doing live forensics while the box was running or memory forensics (where a memory capture was grabbed), AND you rebooted the system, there is no file within /var/log that will attribute processes to users. However, if the user was using the bash shell, then you could check the .bash_history file, which shows the commands that were run by that user (the last 500, I think, by default for the bash shell).
Alternatively, if a memory dump was made (/dev/mem or /dev/kmem), then you could use Volatility to pull out processes that were run on the box. But still, I do not think you could attribute the processes to the users that ran them; you would need additional output from Volatility for that link to be made.
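If such a capture does exist, a hedged Volatility 2 sketch (the image name and profile string are placeholders; a profile matching the exact kernel has to be built first):

    # List processes that were running when the memory image was taken.
    volatility -f memdump.lime --profile=LinuxUbuntu1404x64 linux_pslist

    # Recover bash history fragments still resident in process memory,
    # which can help tie commands back to a particular user's session.
    volatility -f memdump.lime --profile=LinuxUbuntu1404x64 linux_bash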

Linux disc partition and Nginx

In the Linux Bible book, I've found that it is useful to install Linux on different partitions; for example, separating /var is beneficial because it prevents an attacker from filling the hard drive and stopping the OS (since the pages will be in /var/www/), while letting the application that lives in /usr (nginx, for example) keep running. How can we do this?
I'm sorry for the question, as I'm new to Linux. The first time I tried to load another partition (the D: drive in Windows), it asked me to mount it first (I had made a shortcut to a document on D:, and the shortcut didn't work until I mounted the partition). So does it make sense to make five partitions (/boot, /usr, /var, /home, /tmp) to load the OS?
Do web hosters use the same strategy?
Even if you divide the partitions, an attacker can fill the logs and make the web service unstable. Logs are mostly (or by default) located in the /var/log folder; some distros even put the log folder under /etc/webserver/log. There are also some upload-related flaws where PHP upload features fill up the space limit of the /tmp folder.
Partitioning by itself will not protect you at all. You must look at security from another perspective.
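As for the practical "how can we do this" part: the split is normally chosen at install time (or later with LVM) and recorded in /etc/fstab. A purely illustrative sketch, with made-up device names, options and sizes:

    # /etc/fstab - illustrative only; devices, options and sizes are examples
    /dev/sda1   /boot   ext4    defaults,nodev,nosuid,noexec   0  2
    /dev/sda2   /       ext4    defaults                       0  1
    /dev/sda5   /usr    ext4    defaults,nodev                 0  2
    /dev/sda6   /var    ext4    defaults,nodev,nosuid          0  2
    /dev/sda7   /home   ext4    defaults,nodev,nosuid          0  2
    tmpfs       /tmp    tmpfs   defaults,nodev,nosuid,size=1G  0  0

Options like nodev/nosuid add a little hardening, and size= caps how much of /tmp an upload flood can consume, but as noted above this only limits the damage rather than preventing it.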

Linux: will file reads from CIFS be cached in memory?

I am writing a streaming server for Linux that reads files from CIFS mounts and sends them over a socket. Ideally, Linux will cache the file in memory so that subsequent reads will be faster. Is this the case? Can I tell the kernel to cache network reads?
Edit: there will be multiple reads, but no writes, on these files.
Thanks!
Update: I've tested this on a CIFS volume, using fadvise POSIX_FADV_WILLNEED to cache the file locally (using linux-ftools on the command line). It turns out that the volume needs to be mounted in read-write mode for this to work; in read-only mode the fadvise seems to be ignored. This must have something to do with the Samba oplock mechanism.
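A hedged way to reproduce that kind of test, assuming a file such as /mnt/cifs/movie.mkv on the mount (the path is made up; fincore here is the util-linux tool, and linux-ftools ships similar utilities):

    # First read from the CIFS mount warms the page cache.
    time dd if=/mnt/cifs/movie.mkv of=/dev/null bs=1M

    # Show how many pages of the file are resident in the page cache.
    fincore /mnt/cifs/movie.mkv

    # A second read should be served largely from the cache (assuming the
    # client was able to take an oplock/lease) and finish noticeably faster.
    time dd if=/mnt/cifs/movie.mkv of=/dev/null bs=1M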
Subject to the usual cache coherency rules [1] in CIFS, yes, the kernel CIFS client will cache file data.
[1] Roughly, CIFS is uncached in principle, but by taking oplocks the client can cache data more aggressively. For an explanation of CIFS locking, see e.g. the Samba manual at http://www.samba.org/samba/docs/man/Samba-HOWTO-Collection/locking.html . If the client(s) open the files in read-only mode, then I suspect the client will use level 2 oplocks, and as no conflicting access takes place, multiple clients should be able to hold level 2 oplocks for the same files. Only when some client requests write access to the files will the oplocks be broken.
