I frequently use jstat to obtain GC-related statistics. However, there are times when I simply cannot obtain any statistics from a JVM: jstat just says it cannot find the process id, even though it's the correct id.
Digging slightly deeper, it seems to be related to the "/tmp/hsperfdata" files. On servers that have this directory and its files, jstat works as expected (and jps shows the correct instances). On the servers where it fails, I don't see the "/tmp/hsperfdata" directory (and jps does not report the pid).
Has anyone else run into this? I read somewhere that the TMP variable needs to be set correctly, but I don't see any difference in environment settings between the server account where it works and the one where it doesn't.
Maybe this helps: http://bugs.sun.com/view_bug.do?bug_id=7009828
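If it's the same problem, a quick way to check (assuming the JVM runs as user appuser with pid 12345; the names here are just placeholders) is:

ls -ld /tmp/hsperfdata_appuser
ls -l /tmp/hsperfdata_appuser/12345

If the per-user directory or the pid file is missing, check whether the JVM was started with -XX:-UsePerfData or -XX:+PerfDisableSharedMem, either of which stops the performance data file from being created, and make sure you run jstat as the same user as the target JVM.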
I need to get a list of all modified files on my Unix machines (AIX, Solaris, Red Hat, CentOS, HP-UX) in a specific time range (similar to Process Monitor or forfiles on Windows).
I tried the find command, but since it can't search by a specific PID, I got too many results.
I wanted to narrow down the results by looking for files that were modified by a specific process, so I used lsof for a specific PID. But that gave me a list of files that were accessed, which wasn't helpful, because I couldn't tell whether the process had changed them.
I tried strace for a specific PID, but the output was too hard to work with (too much irrelevant info, and I need to cover a 24-hour time range).
I've kind of hit a dead end. Any ideas?
(In short: I want a list of all files modified by a specific process in a specific time range.)
Linux does not maintain a record, of any kind, of which files were modified by which process.
The only logged information is each file's last modification timestamp. And even that can be arbitrarily adjusted, by any process with the appropriate privileges, to ten years in the future, for example.
The short answer is that the information you're looking for does not exist.
The closest thing I know of for your use case is SELinux. This will only work if SELinux is enabled on your operating system.
SELinux is capable of logging a bunch of information, including the uid, gid, and pid (exactly what you need) for different operations.
For more details look at:
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Security_Guide/sec-Understanding_Audit_Log_Files.html
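For example, with auditd running you can watch a directory for writes and then query the log by key; the pid and uid of the writing process show up in each event (the path and key name below are just placeholders):

auditctl -w /var/www -p w -k www-writes
ausearch -k www-writes --start today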
I see that inspecting /proc/self/maps on Linux machines lets me see the pages that have been mapped in, so I can write a program to read and parse those pages.
How could one go about doing something similar for Windows? Are there any APIs for the same? If not, do you have any suggestions on how this could be done?
Yes, it is possible. First you have to open the target process's memory, or rather, make it "accessible". Then you can read that memory.
Here are some useful links (by the way, MSDN should always be your first stop if you come from Linux and are trying to do things on Windows; it is the main source):
https://msdn.microsoft.com/en-us/library/windows/desktop/ms680553%28v=vs.85%29.aspx
https://msdn.microsoft.com/en-us/library/windows/desktop/aa366916%28v=vs.85%29.aspx
It is all documented there.
But there are also undocumented approaches, really crazy stuff, that deal with this topic as well. Like this, for example:
http://undocumented.ntinternals.net/UserMode/Undocumented%20Functions/Memory%20Management/Virtual%20Memory/NtReadVirtualMemory.html
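To make it concrete, here is a minimal sketch of the documented route, using OpenProcess, VirtualQueryEx, and ReadProcessMemory (the pid is a placeholder and error handling is minimal):

#include <windows.h>
#include <stdio.h>

int main(void)
{
    DWORD pid = 1234; /* placeholder: pid of the target process */

    /* Open the target with rights to query and read its memory */
    HANDLE h = OpenProcess(PROCESS_QUERY_INFORMATION | PROCESS_VM_READ,
                           FALSE, pid);
    if (h == NULL) {
        fprintf(stderr, "OpenProcess failed: %lu\n", GetLastError());
        return 1;
    }

    /* Walk the address space, the rough equivalent of /proc/<pid>/maps */
    MEMORY_BASIC_INFORMATION mbi;
    unsigned char *addr = NULL;
    while (VirtualQueryEx(h, addr, &mbi, sizeof(mbi)) == sizeof(mbi)) {
        if (mbi.State == MEM_COMMIT &&
            mbi.Protect != PAGE_NOACCESS && !(mbi.Protect & PAGE_GUARD)) {
            printf("region at %p, size %zu\n",
                   mbi.BaseAddress, (size_t)mbi.RegionSize);

            /* Read the first few bytes of the region */
            unsigned char buf[16];
            SIZE_T got = 0;
            if (ReadProcessMemory(h, mbi.BaseAddress, buf,
                                  sizeof(buf), &got)) {
                /* ... parse the got bytes in buf ... */
            }
        }
        addr = (unsigned char *)mbi.BaseAddress + mbi.RegionSize;
    }

    CloseHandle(h);
    return 0;
}

Note that to open another user's process for reading you generally need to be an administrator (SeDebugPrivilege); opening your own processes works without it.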
So I was looking into why a program was getting rid of my background, and the author of the program said to post .xsession-errors, which many people did. Then my next question was: what is .xsession-errors? A Google search reveals many results, but nothing explaining what it is.
What I know so far:
It's some kind of error log. I can't figure out what it relates to (Ubuntu itself? programs?).
I have one, and it seems like all Ubuntu systems have it, though I can't verify that.
Linux graphical interfaces (such as GNOME) provide a way to run applications by clicking on icons instead of running them manually on the command-line. However, by doing so, output from the command-line is lost - especially the error output (STDERR).
To deal with this, some display managers (such as GDM) pipe the error output to ~/.xsession-errors, which can then be used for debugging purposes. Note that since all applications launched this way dump to the same log, it can get quite large and difficult to find specific messages.
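If you want to watch it live while reproducing a problem, something like this works:

tail -f ~/.xsession-errors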
Update: Per the documentation:
"The ~/.xsession-errors X session log file has been deprecated and is no longer used."
It has been replaced by the systemd journal (journalctl command).
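On those systemd-based releases you can pull the equivalent messages for your session with, for example:

journalctl --user -b

(the -b flag limits the output to the current boot).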
It's the error log produced by the X Window System (which the Ubuntu GUI is built on top of).
Basically it's quite a low level error log for X11.
I am having an issue where Apache logs are growing out of proportion on several servers (Linux CentOS 5). I will eventually disable logging completely, but for now I need a quick fix to reclaim the disk space.
I have tried echo " " > /path/to/log.log and * > /path/to/log.log, but they take too long and almost crash the server, as the logs are as large as 100GB.
Deleting the files works fast, but my question is: will it cause a problem when I restart Apache? My servers are live and full of users, so I can't crash them.
Your help is appreciated.
Use the truncate command:
truncate -s 0 /path/to/log.log
In the longer term you should use logrotate to keep the logs from getting out of hand.
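A minimal policy might look like this (CentOS usually ships one in /etc/logrotate.d/httpd already; the paths and retention below are illustrative):

/var/log/httpd/*log {
    daily
    rotate 14
    compress
    delaycompress
    missingok
    notifempty
    sharedscripts
    postrotate
        /sbin/service httpd reload > /dev/null 2>&1 || true
    endscript
}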
Try this:
cat /dev/null > /path/to/log.log
mv /path/to/log.log /path/to/log.log.1
Do this for your access and error logs and, if you are really doing this in production, your rewrite logs.
This doesn't affect Apache on *nix, since the file stays open. Then restart Apache. Yes, I know I said restart, but a restart usually takes a second or so, so I doubt anyone will notice (or they'll blame it on the network). The restarted Apache will be running with a new set of log files.
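For what it's worth, apachectl graceful reopens the log files without dropping in-flight requests, so the whole sequence might look like this (paths are illustrative):

mv /path/to/access.log /path/to/access.log.1
mv /path/to/error.log /path/to/error.log.1
apachectl graceful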
In terms of your current logs, IMO you need to keep at least the last 3 months of error logs and 1 month of access logs, but look at your volumetrics to decide your rough per-week volumes for each. Don't truncate the old files. If necessary, do a nice tail piped to gzip -c to archive them. If you want to split them up, use a loop doing tail | head | gzip with the --bytes=nnG option. OK, you'll split across the odd line, but that's better than deleting the lot as you suggest.
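For example, to archive roughly the last 5GB of a huge access log before cleaning it up (size and paths are illustrative):

tail --bytes=5G /path/to/access.log | gzip -c > /path/to/archive/access-recent.log.gz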
Of course, you could just delete the lot as you and others propose, but what are you going to do if you've realised that the site has been hacked recently? "Sorry: too late; I've deleted the evidence!"
Then, for goodness' sake, implement a logrotate regime.
I need to write a script that checks the disk every minute and reports if it is failing for any reason. The error could be outright disk failure, a bad sector, and so on.
First, I wonder if there is a script out there that already does this, as it should be a standard procedure (because I really do not want to reinvent the wheel).
Second, if I want to look for errors in /var/log/messages, is there a list of standard disk error strings that I can use?
I've searched the net a lot; there's plenty of information and at the same time nothing specific about this.
Any help will be much appreciated.
Thanks,
You could simply parse the output of dmesg, which usually reports fairly detailed information about drive errors; that's how I've collected stats on failing drives before.
You might get better, more well-documented information by using Parse::Syslog, or by going to lower-level kernel reporting directly, though.
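For example, something as crude as this catches most ATA/SCSI error lines (the pattern is only a starting point):

dmesg | grep -iE 'ata[0-9]+|i/o error|sector'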
Logwatch handles the /var/log/messages part of the ordeal (as well as any other logfiles that you choose to add). You can either use it as-is, or use its code to roll your own solution (it's all written in Perl).
If your hard drives support SMART, I suggest you use smartctl output for diagnostics, as it includes a lot of useful info that can be monitored over time to detect impending failure.
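For example (the device name is a placeholder; -H prints the overall health verdict, -A the raw attributes such as Reallocated_Sector_Ct):

smartctl -H /dev/sda
smartctl -A /dev/sda

Run the -H check from cron and grep its output for FAILED, and you have a crude once-a-minute monitor.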