How to change segments of lines in a file based on regex (Linux)

I have a file where each line begins with a specific logging info. Here is an example:
12-May 02:01:18:INFO:root:restapid=>someurlhere
12-May 02:01:19:INFO:root:response=>loremipsum
I want to catch this time info and replace it with the current date & time. I'm able to get the target part by using egrep, but unfortunately I couldn't find a way to change it (probably because I'm not familiar with sed). How can I do that? My egrep solution is the following:
egrep '^[0-9]+-[A-Za-z]+ [0-9]+:[0-9]+:[0-9]+' weblog.api
If I can manage this part, I want to assign this command to a function (or alias) in my bashrc, and when I run it I want to replace all the time info with the current time by calling something like
sample_alias weblog.api
The desired output format is the following (let's say the time right now is 05-Feb 05:02:03; I believe I can get the time info with the date "+%y%m%d%H%M" command):
05-Feb 05:02:03:INFO:root:restapid=>someurlhere
05-Feb 05:02:03:INFO:root:response=>loremipsum

cat v1
12-May 02:01:18:INFO:root:restapid=>someurlhere
12-May 02:01:19:INFO:root:response=>loremipsum
sed 's/[0-9]*-[A-Za-z]* [0-9]*:[0-9]*:[0-9]*/'"$(date +"%d-%b %H:%M:%S")"'/g' v1
05-Feb 06:21:30:INFO:root:restapid=>someurlhere
05-Feb 06:21:30:INFO:root:response=>loremipsum

Related

What does this SED command do and how can I modify it for my use case?

I have been asked to fix someone else's code, so I'm unsure how the command actually works, as I've never had to work with regex-type code.
sed -r 's/([0-9]{2})\/([0-9]{2})\/([0-9]{4})\s([0-9]{2}:[0-9]{2}:[0-9]{2})/\3\/\1\/\2 \4/g'
This code reads the below txt file and is 'meant' to display only the highlighted number below (the 15 in 12345/15).
placeholder_name 01/01/2022 12:00:00 01/01/2022 12:00:01 STATUS 12345/15 50
This is output to a new temp file but the issue is that only the first character in the number after the '/' is displayed, i.e. for the above example only 1 is displayed.
How would I modify the above command to take the full number after the '/'? Alternatively, if there is a nicer/better way to do this, I'd be happy to hear it.
Note: the highlighted number has a range of 1-99.
Using sed
$ sed -E 's#.*/([[:digit:]]+).*#\1#' input_file
15
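An awk alternative, as a sketch under the assumption that the slash-separated number always sits in the next-to-last whitespace field (as it does in the sample line):

awk '{ sub(/.*\//, "", $(NF-1)); print $(NF-1) }' input_file

This strips everything up to and including the '/' from that field and also prints 15 for the sample line.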

Is it possible to display a file's contents and delete that file in the same command?

I'm trying to display the output of an AWS lambda that is being captured in a temporary text file, and I want to remove that file as I display its contents. Right now I'm doing:
... && cat output.json && rm output.json
Is there a clever way to combine those last two commands into one command? My goal is to make the full combined command string as short as possible.
For cases where it is possible to control the name of the temporary text file, and the file is not used by other code, it may be possible to pass "/dev/stdout" as the name of the output file.
Regarding portability, see the Stack Exchange question "how portable ... /dev/stdout".
POSIX 7 says they are extensions.
Base Definitions,
Section 2.1.1 Requirements:
The system may provide non-standard extensions. These are features not required by POSIX.1-2008 and may include, but are not limited to:
[...]
• Additional character special files with special properties (for example,  /dev/stdin, /dev/stdout,  and  /dev/stderr)
Using the mandatorily supported /dev/tty would force output to the current terminal, making it impossible to pipe the output of the whole command into a different program (or a log file), or to use the program when there is no connected terminal (cron jobs or other automation tools).
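A minimal sketch of that approach; some_producer and its --output option are hypothetical stand-ins for whatever currently writes output.json:

# Before: some_producer --output output.json && cat output.json && rm output.json
# After: the payload goes straight to the terminal (or down a pipe), and there is no file to delete
some_producer --output /dev/stdout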
No, you cannot easily remove the lines of a file while displaying them. It would be highly inefficient, as it would require removing characters from the beginning of the file each time you read a line. Current filesystems are pretty good at truncating a file at its end, but not at its beginning.
A simple but extremely slow method would look like this:
while [ -s output.json ]
do
    head -1 output.json
    sed -i 1d output.json
done
While this algorithm is plain and simple, you should know that each time you remove the first line with sed -i 1d, it copies the whole content of the file except the first line into a temporary file, resulting in approximately 0.5*n² lines written in total (where n is the number of lines in your file).
In theory you could avoid this by doing something like this:
while [ -s output.json ]
do
    line=$(head -1 output.json)
    printf -- '%s\n' "$line"
    # collapse the first line (plus its newline) out of the front of the file
    fallocate -c -o 0 -l $((${#line}+1)) output.json
done
But this does not account for multi-byte line endings (namely DOS-style CRLF), and fallocate does not always work on xfs, among other issues.
Since you are trying to consume a file alongside its creation without leaving a trace of its existence on disk, you are essentially asking for a pipe functionality. In my opinion you should look into how your output.json file is produced and hopefully you can pipe it to a script of your own.
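If /dev/stdout is not an option but the output file name can still be chosen, a named pipe gives a similar effect; a sketch (some_producer and --output are again hypothetical stand-ins for whatever writes output.json):

mkfifo output.json                     # a named pipe instead of a regular file
some_producer --output output.json &   # the writer blocks until a reader opens the pipe
cat output.json                        # reading drains the data as it is produced
rm output.json                         # removes only the pipe; no data ever hit the disk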

How do I use Nagios to monitor a log file that generates a random ID

This the log file that I want to monitor:
/test/James-2018-11-16_15215125111115-16.15.41.111-appserver0.log
I want Nagios to read this log file so I can monitor it for a specific string.
The issue is with 15215125111115: this is the random ID that gets generated.
Here is my script where the Nagios is checking for the Logfile path:
Variables:
HOSTNAMEIP=$(/bin/hostname -i)
DATE=$(date +%F)
..
CHECK=$(/usr/lib64/nagios/plugins/check_logfiles/check_logfiles
--tag='failorder' --logfile=/test/james-${date +"%F"}_-${HOSTNAMEIP}-appserver0.log
....
I am getting the following output in nagios:
could not find logfile /test/James-2018-11-16_-16.15.41.111-appserver0.log
15215125111115 This number is always generated randomly but I don't know how to get nagios to identify it. Is there a way to add a variable for this or something? I tried adding an asterisk "*" but that didn't work.
Any ideas would be much appreciated.
--tag failorder --type rotating::uniform --logfile /test/dummy \
--rotation "james-$(date +"%F")_\d+-${HOSTNAMEIP}-appserver0.log"
If you add a "-v" you can see what happens inside. The type rotating::uniform tells check_logfiles that the rotation scheme makes no difference between the current log and rotated archives regarding the filename. (You frequently find something like xyz..log.) What check_logfiles does is look into the directory where the logfiles are supposed to be; from /test/dummy it only uses the directory part. Then it takes all the files inside /test and compares the filenames with the --rotation argument. The files which match are sorted by modification time, so check_logfiles knows which of the files in question was updated most recently, and the newest is considered to be the current logfile. Inside this file, check_logfiles searches for the criticalpattern.
Gerhard
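Assembled with the variables from the question, the call might end up looking roughly like this (a sketch; the --criticalpattern value is a placeholder for whatever string is actually being monitored):

HOSTNAMEIP=$(/bin/hostname -i)
CHECK=$(/usr/lib64/nagios/plugins/check_logfiles/check_logfiles \
    --tag failorder --type rotating::uniform \
    --logfile /test/dummy \
    --rotation "james-$(date +%F)_\d+-${HOSTNAMEIP}-appserver0.log" \
    --criticalpattern 'SOME STRING TO MONITOR')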

How do I remove "X-TMASE-MatchedRID" key/value using "egrep -v"?

My file contains something like the below:
X-TM-AS-Product-Ver: IMSVA-8.2.0.1391-8.0.0.1202-22662.005
X-TM-AS-Result: No--0.364-7.0-31-10
X-imss-scan-details: No--0.364-7.0-31-10
X-TMASE-Version: IMSVA-8.2.0.1391-8.0.1202-22662.005
X-TMASE-Result: 10--0.363600-5.000000
X-TMASE-MatchedRID: 40jyuBT4FtykMGOaBzW2QbxygpRxo469FspPdEyOR1qJNv6smPBGj5g3
9Rgsjteo4vM1YF6AJbZcLc3sLtjOty5V0GTrwsKpl6V6bOpOzUAdzA5USlz33EYWGTXfmDJJ3Qf
wsVk0UbuGrPnef/I+eo9h73qb6JgVCR2fClyPE+EPh2lMKov3fdtvzshqXylpWZGeMhmJ7ScqBW
z6M5VHW/fngY5M/1HkzhvqqZL61o+ZdBoyruxjzQ==
This is my real text! I need to extract this line!
The existing code, written in the past by someone else, executes the below line:
cat $my_file | egrep -v "^(X-TM-AS)" |
    egrep -v "X-imss-scan-details"
supposedly to remove all those key value lines which start with "X-".
The above piece of code had been working fine up until today because keys starting with X-TMASE had never been among the keys in the past. They started to appear in the files today, and therefore the code now fails to extract the useful data.
Among the newly added keys, it seems to me that X-TMASE-MatchedRID is the one creating the headache for us, as it has a value which spans multiple lines:
X-TMASE-MatchedRID: 40jyuBT4FtykMGOaBzW2QbxygpRxo469FspPdEyOR1qJNv6smPBGj5g3
9Rgsjteo4vM1YF6AJbZcLc3sLtjOty5V0GTrwsKpl6V6bOpOzUAdzA5USlz33EYWGTXfmDJJ3Qf
wsVk0UbuGrPnef/I+eo9h73qb6JgVCR2fClyPE+EPh2lMKov3fdtvzshqXylpWZGeMhmJ7ScqBW
z6M5VHW/fngY5M/1HkzhvqqZL61o+ZdBoyruxjzQ==
Initially I tried the below:
cat $my_file | egrep -v "^(X-TM-AS)" |
    egrep -v "X-imss-scan-details" |
    egrep -v "^(X-TMASE-)"
But it didn't work. It didn't completely eliminate the value for X-TMASE-MatchedRID:
9Rgsjteo4vM1YF6AJbZcLc3sLtjOty5V0GTrwsKpl6V6bOpOzUAdzA5USlz33EYWGTXfmDJJ3Qf
wsVk0UbuGrPnef/I+eo9h73qb6JgVCR2fClyPE+EPh2lMKov3fdtvzshqXylpWZGeMhmJ7ScqBW
z6M5VHW/fngY5M/1HkzhvqqZL61o+ZdBoyruxjzQ==
This is my real text! I need to extract this line!
I wanted the output to be:
This is my real text! I need to extract this line!
That is, I don't want any metadata to be seen in the output.
Any idea how that can be achieved using egrep or any equivalent command?
If you just want to remove the first paragraph, some other command is better suited, for example sed:
sed '1,/^$/ d' "$my_file"
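If the file has no blank line after the headers, another sketch is to delete the folded X-TMASE-MatchedRID value as a range and then filter the remaining header lines. This assumes, as in the sample above, that the value's last line ends with base64 == padding:

sed '/^X-TMASE-MatchedRID:/,/==$/d' "$my_file" | egrep -v "^(X-TM-AS|X-imss-scan-details|X-TMASE-)"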

Find and Replace in bash Shell

Please advise on replacing a variable with the latest date & time.
Here is my requirement.
FN=`basename $0`
TS=`date '+%m/%d/%Y %T'`
QD='08/27/2014 16:25:45'
Then I have a query to run. After it has run, I need to take $TS (current system date & time) and assign it as a value to the $QD variable. This is a loop process and gets updated every time the script runs.
I've tried using sed but was not successful.
Please help.
Programmatically modifying your script to have a different timestamp constant is absolutely and emphatically the wrong way to handle this problem.
Instead, when you want to mark that the query has been done, simply touch a file:
touch lastQueryCompletion
...and when you want to know when the query was last done, check that file's timestamp:
# with GNU date
QD=$(date -r lastQueryCompletion '+%m/%d/%Y %T')
# or, with Mac OS X stat
QD=$(stat -t '%m/%d/%Y %H:%M:%S' -f '%Sm' lastQueryCompletion)
Although you haven't mentioned the overall goal that you wish to accomplish, I have a feeling something like this would be more robust than using sed to update an existing script file.
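Put together, the flow might look like this inside the script (a sketch using the GNU date form from above; the fallback value is arbitrary):

# At the top of the script: when did the last query finish?
if [ -e lastQueryCompletion ]; then
    QD=$(date -r lastQueryCompletion '+%m/%d/%Y %T')
else
    QD='08/27/2014 16:25:45'    # fallback for the very first run
fi

# ... run the query using $QD ...

# After the query succeeds, record the new completion time
touch lastQueryCompletion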
FN=`basename $0`
TS=`date '+%m/%d/%Y %T'`
QD='08/27/2014 16:25:45'    # default, used only if nothing has been saved yet
# Load the latest QD (from the last run)
[ -e ~/.QD.saved ] && QD="`cat ~/.QD.saved`"
...Later in that file...
# Save the new QD value for the next run
echo "`date '+%m/%d/%Y %T'`" > ~/.QD.saved
Although I'm not sure sed is the tool you're looking for, I believe your command would have to go like this:
sed -i -r "s|^QD=.*|QD='$TS'|" "$FN"
Note the double quotes so that $TS expands, and the | delimiter because $TS itself contains slashes. I'm assuming you're using GNU sed, whose -i option does the substitution in place in the file rather than writing the result to standard output.
Well, hope it helps.
