Windows Script to Find String in Log File

I have an IBM Cognos TM1 application running as a Windows service on Windows Server 2008. When I start the service, TM1 writes a log file named "tm1server.log" to "D:\TM1\log\". TM1 keeps writing to this log until the service is ready, which normally takes about 3 hours. When the service is ready, TM1 writes "TM1 Server is ready" to the log.
I want to make a script that continuously checks the log file until the string "TM1 Server is ready" appears. When the string is found, the script should run another script that sends me an email. I have already written the script that sends the email.
Can anybody help me?
Thanks and regards,
Kris
--edit--
I used the findstr command to search for the string:
findstr /d:d:\TM1\log\ "TM1 Server is ready" "D:\TM1\log\tm1server.log" >> result.log
but result.log contains the entire contents of tm1server.log.

On my server, 'TM1 server is ready' is written each time the server starts, so there are many occurrences of 'TM1 server is ready' in one file.
You could take the last 'TM1 server is ready' that was written and test whether it happened in the last 5 (or however many) hours, with something like:
PS E:\TM1_Server_Logs> (Get-Date(select-string .\tm1server.log -pattern '(?<timestamp>\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}).*TM1 server is ready' | select -expand Matches | foreach {$_.groups["timestamp"].value} | Get-Date -format "yyyy-MM-dd HH:mm:ss" | Select-Object -last 1)) -ge (Get-Date).AddHours(-5)
That's a long one-liner, but it should work. If it returns true you could decide what to do next.
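For readability, here is the same check split over several lines. This is only a sketch of the same logic, and it assumes the log lines start with a timestamp such as 2019-01-31 08:00:00, as the regex above does:

# Pull the timestamp of the last "ready" message out of the log
$lastReady = Select-String -Path .\tm1server.log -Pattern '(?<timestamp>\d{4}-\d{2}-\d{2}\s\d{2}:\d{2}:\d{2}).*TM1 server is ready' |
    Select-Object -ExpandProperty Matches |
    ForEach-Object { [datetime]($_.Groups['timestamp'].Value) } |
    Select-Object -Last 1

# True if the most recent "ready" message is less than 5 hours old
$lastReady -ge (Get-Date).AddHours(-5)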

You can also try findstr with /C: (without /C:, findstr treats each space-separated word in "TM1 Server is ready" as a separate search string, which is why your earlier command matched almost every line):
findstr /C:"your token to find" "C:\targetFile.log"
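To cover the "keep checking until the string appears, then send the mail" part, a minimal PowerShell polling sketch could look like the following; the path C:\Scripts\SendMail.ps1 is only a placeholder for the email script you already have:

# Poll the TM1 log until the ready message shows up, then run the mail script
$logFile = 'D:\TM1\log\tm1server.log'
$pattern = 'TM1 Server is ready'
while ($true) {
    if ((Test-Path $logFile) -and
        (Select-String -Path $logFile -Pattern $pattern -SimpleMatch -Quiet)) {
        & 'C:\Scripts\SendMail.ps1'   # placeholder for your existing email script
        break
    }
    Start-Sleep -Seconds 60           # check once a minute
}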

Related

How to send real output without headers to file in PowerShell

I have a PowerShell script where I use Add-Content to write $output to log.txt. $output is generated by Test-Connection nrk.no -count 1 (nrk.no is a random website I trust will remain up most of the time). Right now I get a long string: \\NOTREALNAME\root\cimv2:Win32_PingStatus.Address="nrk.no",BufferSize=32,NoFragmentation=false,RecordRoute=0,ResolveAddressNames=false,SourceRoute="",SourceRouteType=0,Timeout=4000,TimestampRoute=0,TimeToLive=80,TypeofService=0 . How can I get the long output with hyphens and headers (see below), but remove the header part, so that I log the user-friendly part of the output without getting the header and then the data over and over again in my log file?
"The long one with hyphens and headers":
Pipe the output to Format-Table -HideTableHeaders to get table formatted output without the header:
Test-Connection nrk.no |Format-Table -HideTableHeaders |Out-String -Stream |Add-Content log.txt
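Note that Format-Table also emits blank lines around the table body, and Out-String -Stream passes them through as empty strings, so they land in the log too. If that bothers you, the same pipeline can filter them out:

Test-Connection nrk.no |
    Format-Table -HideTableHeaders |
    Out-String -Stream |
    Where-Object { $_.Trim() } |   # drop the empty lines Format-Table produces
    Add-Content log.txt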

Need to parse thousands of files for thousands of results - prefer powershell

I am consistently getting pinged by our government contract holder to search for IP addresses in our logs. I have three firewalls, 30-plus servers, etc., so you can imagine how unwieldy it becomes. To amplify the problem, I have been provided a list of over 1,500 IP addresses for which I am to search all log files...
I have all of the logs downloaded and can use PowerShell to go through them one by one, but it takes forever. I need to run the search multi-threaded in PowerShell but cannot figure out the logic to do so. Here's my one-by-one script...
Any help would be appreciated!
$log = Import-Csv C:\temp\FWLogs\IPSearch.csv
$ips = $log.IP
ForEach ($ip in $ips) {
    Get-ChildItem -Recurse -Path C:\temp\FWLogs -Filter *.log |
        Select-String -Pattern $ip -List |
        Select-Object Path
}
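Two things can speed this up considerably. First, Select-String accepts an array of patterns, so all 1,500 IPs can be searched in a single pass over each file instead of re-reading every file once per IP. Second, the files can be scanned in parallel; on PowerShell 7+ (an assumption about your environment) ForEach-Object -Parallel keeps this short. A sketch, using the same paths as above:

# Load all IPs once; Select-String takes the whole list in one go
$ips = (Import-Csv C:\temp\FWLogs\IPSearch.csv).IP

Get-ChildItem -Recurse -Path C:\temp\FWLogs -Filter *.log |
    ForEach-Object -Parallel {
        # Each file is read once against the full pattern list;
        # -List stops at the first matching line per file
        Select-String -Path $_.FullName -Pattern $using:ips -List |
            Select-Object Path, Pattern
    } -ThrottleLimit 8

On Windows PowerShell 5.1 the same single-pass idea still applies; you would just spread the file batches across Start-Job jobs instead of using -Parallel.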

How to read continuous log file from last read line | Linux Shell

Platform: RHEL7
Situation:
A JMeter report file is appended with new results every 5 minutes by a crontab script.
Another awk script looks for response times greater than 500 ms and sends email alerts.
Problem Statement:
The requirement is to scan only the newly added lines in the report file.
At present, the awk script reads the complete report every time and sends alerts even for older events:
awk -F "," '$4 != 200 || $14 > 500' results.jtl
It would also be good to have the awk script read from the end of the file back to the last line read; that would let it alert on the latest events first.
Any suggestion would be a great help.
Any reason for not using:
Duration Assertion: to fail samplers whose response times are over 500 ms
If Controller: with the condition ${JMeterThread.last_sample_ok}, which checks whether the last sampler was successful
SMTP Request Sampler: to send an email when there is a failure
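If you would rather stay with the log-scanning approach, a common pattern is to remember how far the previous run got and feed only the new lines to the existing awk filter. A sketch, using results.jtl from the question and an arbitrary choice of state-file path:

#!/bin/sh
# Process only the lines added to results.jtl since the previous run.
# /var/tmp/results.jtl.offset is an arbitrary choice for the state file.
REPORT=results.jtl
STATE=/var/tmp/results.jtl.offset

last=$(cat "$STATE" 2>/dev/null || echo 0)
total=$(wc -l < "$REPORT")

# If the report was rotated or truncated, start from the beginning again
[ "$total" -lt "$last" ] && last=0

# Feed only the new lines to the existing awk filter
tail -n +"$((last + 1))" "$REPORT" | awk -F "," '$4 != 200 || $14 > 500'

echo "$total" > "$STATE"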

Need to capture the commands fired on Linux

I would like to capture all the commands fired by a user in a session. This is needed for the purpose of auditing.
I used something like this:
LoggedIn=`date +"%B-%d-%Y-%H:%M"`
HostName=`hostname`
UNIX_USER=`who am i | cut -d " " -f 1`
echo " Please enter a Change Request Number for which you are logging in : "
read CR_NUMBER
FileName=$HostName-$LoggedIn-$CR_NUMBER-$UNIX_USER
script $FileName
I have put this snippet in the .profile file, so that as soon as the user logs in to an SU account it creates the file. The plan is to push this file to a central repository where an auditor can look into those files.
But there are a couple of problems with this.
The script command spools all the data from the session; for example, if a user cats a property file, all the data of the property file is appended to the auditing file.
Unless the user runs the exit command, the data is not spooled to the auditing file, so if the user logs out without running exit, the auditing file will be empty.
Is there any better solution for auditing? The history file is not an option, since it does not tell me for which Change Request number (internal to my organisation) the commands were fired. Is there any other way to capture only the commands fired, but not their output?
Some of the previous discussions are here and here.
I think this software exactly matches your need:
https://github.com/a2o/snoopy
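If installing a tool like snoopy is not an option, a rough bash-only alternative is to log each command (but not its output) from PROMPT_COMMAND. It is easy to bypass, so treat it as a convenience log rather than a security control. A sketch along the lines of the original .profile snippet:

# Ask for the Change Request number, as in the original snippet
echo "Please enter a Change Request Number for which you are logging in: "
read CR_NUMBER
export AUDIT_FILE="$HOME/audit-$(hostname)-$(date +%F)-$CR_NUMBER-$(whoami).log"

# history 1 prints the most recent command; sed strips the leading history number.
# Pressing Enter on an empty prompt repeats the previous entry - a known limitation.
export PROMPT_COMMAND='echo "$(date "+%F %T") $(history 1 | sed "s/^ *[0-9]* *//")" >> "$AUDIT_FILE"'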

How to search the logs on many servers and sort the information?

The idea is very simple:
I would like to pass a word as an argument to a script; the script would then search the logs on all of my servers and, whenever it finds something relevant, write that information to a file. That file would be rsynced to one server, which would sort the combined information from all servers and show me where and when the thing I searched for appeared.
I think this is possible because my servers are synchronized with NTP, so their clocks agree and timestamps from different servers can be compared.
But I wonder if this is a good idea, and how to do this search and sort the logs?
The problem for me is:
1) How do I access my servers to run this search on each one of them?
2) How do I make this search?
3) How do I sort all of this information in the final log (which contains the information from all servers)?
You could add your SSH keys to each server and then, on your main server, add this to your .bashrc:
export web_servers=(server1 server2 server3 server4)
function grepallservers() {
    for s in "${web_servers[@]}"; do echo "$s"; ssh "$s" grep "$@"; done
}
function all-serv-grep() {
    grepallservers "$1" /var/log/error.log
}
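To address the sorting part of the question (point 3), the per-server output can be prefixed with the host name and sorted by timestamp. A sketch, assuming your log lines begin with a sortable timestamp such as 2019-01-31 12:00:00 and the same web_servers list as above:

# Collect matches from every server, tag each line with its host, sort by time.
# Assumes lines start with "YYYY-MM-DD HH:MM:SS"; adjust the sort keys otherwise.
function all-serv-grep-sorted() {
    for s in "${web_servers[@]}"; do
        ssh "$s" grep -h "$1" /var/log/error.log | sed "s/^/$s /"
    done | sort -k2,3
}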
