Linux ssh last - show last logins from the past 10 days

How can I see who last logged in via SSH with the "last" command?
I mean the last 10 days.
It only shows me the last two days, even if I use last -n 1000.
Or maybe my logs only contain the last two days; how can I check that and increase the retention?

You'll need to check /etc/logrotate.conf. Here's the relevant portion from one of my servers:
/var/log/wtmp {
    monthly
    create 0664 root utmp
    minsize 1M
    rotate 1
}
If your server is rotating files out and you want to look at what was in the previous month, then use last with the -f option:
ls /var/log/wtmp*
last -f /var/log/wtmp-20140902 (or whatever the rotated file is called)
Log rotation and renaming are distribution-dependent (thanks David C. Rankin).
Lastly (no pun intended), you can always do a
man last
and get all the potential command line switches.
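If your last comes from util-linux (an assumption; older implementations may lack these options), you can also restrict the output to a time window directly instead of counting entries:
last --since -10days
The --since (and --until) arguments accept dates like YYYY-MM-DD as well as relative forms such as -10days, but they still only cover whatever is left in the current wtmp file, so rotation remains the limiting factor.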

The information about who logged in and when is available in /var/log/auth.log (or other log files on other distributions). There are multiple log monitoring programs that can extract the information you configure as relevant. On any sane system, every user authentication is logged.
If the accounting subsystem is up and running, then lastcomm shows information about finished processes.
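As a concrete sketch (assuming a Debian/Ubuntu-style setup where sshd logs to /var/log/auth.log and its rotated copies), successful SSH logins can be pulled out with:
grep -h 'sshd.*Accepted' /var/log/auth.log /var/log/auth.log.1
zgrep -h 'sshd.*Accepted' /var/log/auth.log.*.gz
How far back this goes depends, again, on the log rotation settings.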

Related

How to recover a very old command from history?

I used a command for some calculations on 3rd March 2021, within the time range 15:37:00 (input file creation time) to 16:17:00 (output file generation time).
Unfortunately, I did not write the command down and cannot remember it now.
Is there any way to get it from history? history only gives the last 1000 commands, which does not reach back to that time period.
If anyone can help me here, it would be very much appreciated.
Thank you in advance.
What do you mean by losing the command? Do you mean that you deleted the command history? It seems like you are hitting the hard limit set in your environment; try increasing it.
echo $HISTSIZE
I mean that I cannot remember the command myself and cannot find any copy of it written down anywhere. I have not deleted anything myself. The machine might have default settings for keeping user history (I do not know exactly).
As a long time (almost 6 months) has passed, I cannot see it in the command history.
$ echo $HISTSIZE
1000
$ history
results in the last 1000 commands I have used.
Then I tried:
$ HISTSIZE=15000
$ echo $HISTSIZE
15000
$ history
still results in ~1000 commands from history.
Is it possible to get the list of commands I have used on 3rd March 2021?
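One thing worth checking, though it cannot bring back lines that were already dropped: in bash the on-disk history is capped by HISTFILESIZE, not HISTSIZE, so raising HISTSIZE in a running shell does not change what is kept in ~/.bash_history. You can see what actually survives on disk with:
wc -l ~/.bash_history
echo $HISTFILESIZE
If the file only holds about 1000 lines, a command from six months ago is gone unless it was captured somewhere else (a terminal log, a script, or system-level auditing).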

Logging VMStat data to file

I am trying to create some capacity planning reports, and one of the requirements is to have information on memory usage for a few Unix servers.
Now my knowledge of Unix is very limited; I usually just log on and run a few scripts.
But for this report I need to gather vmstat data and produce reports based on the previous week's data, broken down by hour, where each hour is an average of vmstat samples taken every 10 seconds.
So, first question: is vmstat logging on by default, and if so, where on the server is the data written?
If not, how can I set this up?
Thanks
vmstat is a command that you run.
One week of virtual memory stats spaced out at ten-second intervals (less the last one) is 60,479 ten-second intervals.
So the command you want is:
nohup vmstat 10 60479 > myvmstatfile.dat &
This will make a very big file, myvmstatfile.dat.
EDIT (thanks RobKielty): The & puts this job in the background, and nohup prevents the task from being killed by the hangup signal when you log out of the shell. If you run this command, it would be prudent to monitor the disk partition the file is being written to; use df -h /path/to/directory/where/outputfile/resides to monitor the disk space usage.
I have no idea what you need to do with the data, so I can't help you there.
Create a crontab entry (crontab -e) like this:
0 0 * * 0 /path/to/my/vmstat_script.sh
The file vmstat_script.sh will contain the following bash commands:
#!/bin/bash
# vmstat_script.sh
vmstat 10 60479 > myvmstatfile.dat
mv myvmstatfile.dat myvmstatfile.dat.`date +%Y-%m-%d`
This will create one file per week with a name like myvmstatfile.dat.2012-07-01
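To get the hourly averages the question asks for, here is a rough post-processing sketch. It assumes the default vmstat column layout (column 4 is free memory in KiB) and that samples really are 10 seconds apart, i.e. 360 per hour:
awk '$4 ~ /^[0-9]+$/ { sum += $4; n++; if (n == 360) { printf "hour %d: avg free %.0f KiB\n", ++h, sum/n; sum = 0; n = 0 } }' myvmstatfile.dat
The numeric test on $4 also skips the header lines that vmstat repeats.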
The command I use for monitoring Linux VM metrics is below:
nohup vmstat 10 720 | (while read; do echo "$(date +%d-%m-%Y" "%H:%M:%S) $REPLY"; done) >> nameofLogfile.log &
Here nohup keeps the process running after you log out, and the trailing & puts it in the background.
It will run for 2 hours at an interval of 10 seconds (720 samples).
This is handy for generating graphs and reports, since a timestamp is included in the log alongside the metrics, so the logs can be filtered accordingly.
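Depending on your procps version, vmstat can also append the timestamp itself with -t, which avoids the quoting dance above (check vmstat --help to confirm your build supports it):
nohup vmstat -t 10 720 >> nameofLogfile.log &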

Cygwin top command - See processes for all users

Does anybody know how to see the processes of all users using the top command in Cygwin (part of the procps package, under System)?
I know this can be done in *nix, but I am struggling in Cygwin. I have tried using pslist, but it does not behave well in a PuTTY SSH console.
I need a solution where I can see a top-like display over SSH. I do not have any NTLM SSO access to the Win2k3 guest at all, so SSH is the only way in.
top only displays Cygwin processes. ps -W will list Windows processes as well.
Many times the command tasklist gets the job done more effectively. It is built into Windows; just make sure your System32 folder is part of your bash profile's PATH. There is also procps itself. You should also try using mintty for your terminal. You could always try attaching any of these task apps to screen, and/or using watch to poll the information, as sketched below.
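Building on the watch idea, a minimal top-like refresh over SSH (assuming Cygwin's procps package, which provides watch, is installed) could be as simple as:
watch -n 2 'ps -W'
or, using the Windows-native tool mentioned above:
watch -n 5 tasklist
It is crude compared to top, but it works fine in a PuTTY session.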
It seems you can do something like:
wmic process get ProcessId,Name,UserModeTime,KernelModeTime /EVERY:1
The user and kernel mode times there seem to be expressed in units of 1/10,000,000th of a second.
You should be able to post-process that output to get the CPU-usage per second.
Here using cygwin's perl:
wmic process get ProcessId,Name,UserModeTime,KernelModeTime /EVERY:1 |
  perl -lne '
    if (/\S/) {
      my ($k,$c,$p,$u) = split /\s{2,}/;
      $n{"$p\t$c"} = $k+$u;
    } else {
      my %c;
      for my $k (keys %n) {
        $c{$k} = $n{$k} - $o{$k} if defined $o{$k};
      }
      print "$_\t" . $c{$_}/1e5 for (sort {$c{$b}<=>$c{$a}} keys %c)[0..20];
      %o = %n; %n = (); print "";
    }'
Outputs something like:
0 System Idle Process 588.12377
2196 sh.exe 107.00075
248 svchost.exe 85.80055
7140 explorer.exe 26.52017
[...]
every second.
Note that if the System Idle Process shows just under 800% on an idle system, that's because your system has 8 CPU cores (or at least 8 hardware threads), since that figure adds up the CPU time of all CPUs.
Also note that the EVERY:1 above is not strictly accurate. wmic doesn't seem to give that output every second; more likely, it sleeps roughly one second between reports and doesn't compensate for the time it takes to compute them. So in practice it runs every second and a bit, which means those percentages are not very accurate and are slightly overestimated.

Linux display average CPU load for last week

On a Linux box, I need to display the average CPU utilisation per hour for the last week. Is that information logged somewhere? Or do I need to write a script that wakes up every 15 minutes to copy /proc/loadavg to a logfile?
EDIT: I'm not allowed to use any tools other than those that come with Linux.
You might want to check out sar (man page), it fits your use case nicely.
System Activity Reporter (SAR) - capture important system performance metrics at
periodic intervals.
Example from IBM Developer Works Article:
Add an entry to your root crontab
# Collect measurements at 10-minute intervals
0,10,20,30,40,50 * * * * /usr/lib/sa/sa1
# Create daily reports and purge old files
0 0 * * * /usr/lib/sa/sa2 -A
Then you can simply query this information using a sar command (display all of today's info):
root ~ # sar -A
Or just for a certain days log file:
root ~ # sar -f /var/log/sa/sa16
You can usually find it in the sysstat package for your Linux distro.
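For the "last week" part, one approach (a sketch; /var/log/sa and the saDD naming are common sysstat defaults but vary by distribution) is to loop over the daily data files and print the CPU report for each day:
for f in /var/log/sa/sa??; do
    echo "== $f =="
    sar -u -f "$f"
done
sar -u is the CPU utilisation report; sar -q gives the load-average figures if that is what you are after.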
As far as I know it's not stored anywhere... It's a trivial thing to write, anyway. Just add something like
cat /proc/loadavg >> /var/log/loads
to your crontab.
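If you go this route, prefixing a timestamp makes it much easier to compute per-hour averages later. A sketch of a crontab entry at 15-minute resolution (note that % has to be escaped in crontab):
*/15 * * * * echo "$(date +\%Y-\%m-\%dT\%H:\%M) $(cat /proc/loadavg)" >> /var/log/loads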
Note that there are monitoring tools (like Munin) which can do this kind of thing for you, and generate pretty graphs of it to boot... they might be overkill for your situation though.
I would recommend looking at Multi Router Traffic Grapher (MRTG).
Using snmpd to read the load average, it will automatically calculate averages at any time interval and length, along with nice charts for analysis.
Someone has already posted a CPU usage example.

linux uptime history

How can I get a history of uptimes for my Debian box? After a reboot, I don't see an option for the uptime command to print a history of uptimes. If it matters, I would like to use these uptimes for graphing a page in PHP to show my webserver's uptime lengths between boots.
Update:
Not sure if it is based on a length of time or if last gets reset on reboot, but I only get the most recent boot timestamp with the last command. last -x also does not return any further info. Sounds like a script is my best bet.
Update:
Uptimed is the information I am looking for; I'm just not sure how to pull that info into code. Managing my own script and database sounds like the best fit for the application.
Install uptimed. It does exactly what you want.
Edit:
You can apparently include it in a PHP page as easily as this:
<? system("/usr/local/bin/uprecords -a -B"); ?>
The last command will give you the reboot times of the system. You could take the difference between successive reboots, and that should give you the uptime of the machine.
update
1800 INFORMATION's answer is a better solution.
You could create a simple script which runs uptime and dumps it to a file.
uptime >> uptime.log
Then set up a cron job for it.
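For example, an hourly root crontab entry (the log path is arbitrary):
0 * * * * /usr/bin/uptime >> /var/log/uptime.log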
Try this out:
last | grep reboot
According to the last manual page:
The pseudo user reboot logs in each time the system is rebooted.
Thus last reboot will show a log of all reboots since the log file
was created.
so the last column of the last reboot command gives you the uptime history:
#last reboot
reboot system boot **************** Sat Sep 21 03:31 - 08:27 (1+04:56)
reboot system boot **************** Wed Aug 7 07:08 - 08:27 (46+01:19)
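If you want to feed those figures into a graph (as in the PHP use case above), the duration in the final column can be pulled out with awk; a sketch, assuming the (days+hh:mm) format shown above:
last reboot | awk '/^reboot/ { print $NF }' | tr -d '()'
which prints values like 1+04:56, one per boot.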
This isn't stored between boots, but The Uptimes Project is a third-party option to track it, with software for a range of platforms.
Another tool available on Debian is uptimed which tracks uptimes between boots.
I would create a cron job to run at the required resolution (say 10 minutes) by entering the following [on one single line - I've just separated it for formatting purposes] in your crontab (crontab -l to list, crontab -e to edit).
0,10,20,30,40,50 * * * *
/bin/echo $(/bin/date +\%Y-\%m-\%d) $(/usr/bin/uptime)
>>/tmp/uptime.hist 2>&1
This appends the date, time and uptime to the uptime.hist file every ten minutes while the machine is running. You can then examine this file manually to figure out the information or write a script to process it as you see fit.
Whenever the uptime reduces, there's been a reboot since the previous record. When there are large gaps between lines (i.e., more than the expected ten minutes), the machine's been down during that time.
This information is not normally saved. However, you can sign up for an online service that will do this for you. You just install a client that will send your uptime to the server every 5 minutes and the site will present you with a graph of your uptimes:
http://uptimes-project.org/
I don't think this information is saved between reboots.
If you shut down properly, you could run a command on shutdown that saves the uptime; that way you could read it back after booting back up.
Or you can use tuptime (https://sourceforge.net/projects/tuptime/) for the total uptime.
You can use tuptime, a simple command for reporting the total uptime in Linux, keeping it between reboots.
http://sourceforge.net/projects/tuptime/
Since I haven't found an answer here that would help retroactively, maybe this will help someone.
kern.log (depending on your distribution) should log a timestamp.
It will be something like:
2019-01-28T06:25:25.459477+00:00 someserver kernel: [44114473.614361] somemessage
"44114473.614361" represents seconds since last boot, from that you can calculate the uptime without having to install anything.
Nagios can also produce very nice graphs of this.
Use Syslog
For anyone coming here searching for their past uptime.
The solution from 1800 INFORMATION is good advice for the future, but I needed to find information about my past uptimes on a specific date.
Therefore I used syslog to determine when the system was started that day (the first log entry of that day) and when it was shut down again.
Boot time
To get the system start time, grep for the month and day and show only the first few lines:
sudo grep "May 28" /var/log/syslog* | head
Shutdown time
To get the system shutdown time, grep for the month and day and show only the last few lines:
sudo grep "May 28" /var/log/syslog* | tail
