Unable to see the whole content of a log displayed with the tail option - Linux

I am using PuTTY to log on to the Linux server and view the logs.
I used the command tail -f -n 1000 MyLog.log
It started displaying the last 1000 lines, but I could not move the cursor through the whole displayed content; I could only move within the buffered content.
Please tell me if there is any other option so that I can see the whole content (that is, 1000 lines).
Thank you.
Where do I need to change the settings?

This might be because of PuTTY, which by default has 200 lines of scrollback.
You can change this by going to PuTTY - Window - Lines of scrollback.

You can use a pager; less is the most common one.
Run it like this:
tail -f -n REQD_LINE_NOS FILENAME | less
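For example, with the file name and line count from the question (note that with -f the pipeline keeps following the file, so you may need to interrupt tail with Ctrl+C after quitting less):
tail -f -n 1000 MyLog.log | less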

grep empty output file

I made a shell script the purpose of which is to find files that don't contain a particular string, then display the first line that isn't empty or otherwise useless. My script works well in the console, but for some reason when I try to direct the output to a .txt file, it comes out empty.
Here's my script:
#!/bin/bash
# Take user input.
echo "Input substance:"
read substance
echo "Listing media without $substance:"
cd media
# Find the names of files that don't feature the given substance, then put them inside an array.
searchresult=($(grep -L "$substance" *))
# Iterate over the array and print the first line of each file - it contains both the number and the medium name.
# However, some files start with "Microorganisms" and the actual number and name appear after several empty lines;
# the script checks for that occurrence and prints the first line that doesn't match these criteria.
for i in "${searchresult[@]}"
do
    grep -m 1 -v "Microorganisms\|^$" "$i"
done >> output.txt
I've tried moving the >>output.txt to right after the grep line inside the loop, tried switching >> to > and 2>&1, tried using tee. No go.
I'm honestly feeling utterly stuck as to what the issue could be. I'm sure there's something I'm missing, but I'm nowhere near good enough with this to notice. I would very much appreciate any help.
EDIT: Added files to better illustrate what I'm working with. Sample inputs I tried: Glucose, Yeast extract, Agar. Link to files [140kB] - the folder was unzipped beforehand.
The script was given full permissions to execute. I don't think the output is being rewritten because even if I don't iterate and just run a single line of the loop, the file is empty.

Problem running a command from Rundeck (Linux)

On a Linux server I am running the following command without any error and getting the result:
[xxxxx@server1 ~]$ grep -o "\-w.*%" /etc/sysconfig/nrpe-disk
-w 15% -c 7%
[xxxxx@server1 ~]$
I want to run the same command from Rundeck's command-line interface with the same xxxxx user, which has sudo rights too.
The command executed from Rundeck gives an "invalid option '.'" error:
option invalide -- '.'
Utilisation : grep [OPTION]... MOTIF [FICHIER].
I tried many different ways, such as escaping the . sign, running it with sudo, using the absolute path, double quotes, single quotes, etc. I still receive the same output, even though the command works locally on the server. What's the way to fix it?
You can do that by putting the command in an inline script ("Script" step) or by calling an external script with the command content ("Script file or URL" step).
Another way is to use the cat tool to print the file and capture the output using a log filter: click the tiny gear icon at the left of the step, click "Add Log Filter", select "Key/value data", use the regex .*(-w .*%).* as the pattern, give the data a name (e.g. diskdata), and tick the "Log data" checkbox. You will get the output that you want, and you can print that value using echo ${data.diskdata} in the next step.
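For reference, a minimal body for the inline-script ("Script") step could be as small as this sketch (the grep command is copied verbatim from the question; whether sudo is needed depends on the node and user configuration):
#!/bin/bash
# Inline-script step body; the grep command is taken verbatim from the question.
grep -o "\-w.*%" /etc/sysconfig/nrpe-disk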

Deleting text in a file with "sed" isn't working as expected

I am currently working on a little script around the "nslookup" command, and in my testing I encountered a problem I don't understand. In my script a .txt file is automatically created and the user can add some text to it if he wishes to. He can also delete specific lines in the document. I tried writing that with "sed", but it doesn't seem to work correctly.
Here is the menu from the terminal output:
Domains:
1) new_domain
2) domain
3) Create new Domain
4) Delete a Domain
5) Quit
Input>
The first two numbers also represent the line number of the corresponding entry in the text file.
The code for deleting a domain is the following:
filename=domains.txt
old_filename=domains_backup.txt
read -p "Which domain-number shall be deleted?: " num_input
mv $filename $old_filename
sed "/$num_input/d" < $old_filename > $filename
rm $old_filename
But when that script is executed and the user wants to delete line 2 (domain), the text file remains the same and is not updated.
When I try the same only using the terminal everything works fine.
Is there something I'm missing?
To delete a line by its line number you will want to use $num_input d rather than /$num_input/d: the latter matches lines that contain $num_input.
As a side note, if you use GNU sed you could let it handle the backup:
sed -i.backup "$num_input d" domains.txt
This would create a copy of the untouched domains.txt as domains.txt.backup (or whatever suffix you specify after -i) and update the domains.txt file.
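Folded back into the script from the question, the corrected deletion step might look like this (a sketch assuming GNU sed; the variable names are the ones from the question):
filename=domains.txt
read -p "Which domain-number shall be deleted?: " num_input
# Delete line number $num_input in place; GNU sed keeps the original as domains.txt.backup
sed -i.backup "${num_input}d" "$filename"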

How to use the sed command to delete lines without a backup file?

I have a large file with a size of 130 GB.
# ls -lrth
-rw-------. 1 root root 129G Apr 20 04:25 syslog.log
I need to reduce the file size by deleting the lines which start with "Nov 2", so I used the following command:
sed -i '/Nov 2/d' syslog.log
The file is too large, so I can't edit it using the vim editor either.
When I run the sed command, it creates a backup file as well, but I don't have much free space in root. Please give an alternative solution to delete particular lines from this file without using additional space on the server.
It does not create a real backup file. sed is a stream editor. When applied to a file with option -i, it will stream that file through the sed process, write the output to a new file (a temporary one), and, when everything is done, rename the new file to the original name.
(There are options to create backup files also, but you didn't give them, so I won't mention that further.)
In your case you have a very large file and don't want to create any copy, however temporary. For this you need to open the file for reading and writing at the same time, then your sed process can overwrite the original. After this, you will have to truncate the file at the end of the writing.
To demonstrate how this can be done, we first perform a test case.
Create a test file, containing lots of lines:
seq 0 999999 > x
Now, let's say we want to remove all lines containing the digit 4:
grep -v 4 1<>x <x
This will open the file for reading and writing as STDOUT (1), and for reading as STDIN. The grep command will read all lines and will output only the lines not containing a 4 (option -v).
This will effectively overwrite the beginning of the original file.
You will not know how long the output is, so after the output the original contents of the file will appear:
…
999991
999992
999993
999995
999996
999997
999998
999999
537824
537825
537826
537827
537828
537829
…
You can use the Unix tool truncate to shorten your file manually afterwards. In a real scenario you will have trouble finding the right spot for this, so it makes sense to count the number of bytes written (using wc):
(Don't forget to recreate the original x for this test.)
(grep -v 4 <x | tee /dev/stderr 1<>x) |& wc -c
This will perform the step above and additionally print out the number of bytes written to the terminal; in this example the output will be 3653658. Now use truncate:
truncate -s 3653658 x
Now you have the result you want.
If you want to do this in a script, i. e. without interaction, you can use this:
length=$( (grep -v 4 <x | tee /dev/stderr 1<>x) |& wc -c )
truncate -s "$length" x
I cannot guarantee that this will work for files >2 GB or >4 GB on your machine; depending on your operating system (32-bit?) and the versions of the installed tools, you might run into large-file issues. I'd perform tests with large files first (>4 GB, as this is typically a limit for many things), then cross my fingers and give it a try :)
Some caveats you have to keep in mind:
Of course, nobody is supposed to append log entries to that log file while the procedure is running.
Also, any abort during the running of the process (power failure, signal caught, etc.) will leave the file in an undefined state. But re-running the command again after such a mishap will in most cases produce the correct output; some lines might be doubled, but not more than a single line should be corrupted then.
The output must be smaller than the input, of course, otherwise the writing will overtake the reading, corrupting the whole result so that lines which should be there will be missing (or truncated at the start).
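Applied to the syslog.log from the question, the whole procedure might look like the following sketch (the '^Nov 2' pattern reflects the question's "lines which start with Nov 2"; as noted above, nothing must write to the file while this runs, and it is untested at the 130 GB scale):
#!/bin/bash
# Filter syslog.log in place and then truncate it to the new length.
file=syslog.log
# Keep only lines that do not start with "Nov 2", overwrite the file from the
# beginning, and count the bytes written (tee sends one copy to stderr for wc).
length=$( (grep -v '^Nov 2' <"$file" | tee /dev/stderr 1<>"$file") |& wc -c )
# Cut the file off at the end of the filtered output.
truncate -s "$length" "$file"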

Rollover shell script

Assume a shell script (commands.sh) with a few commands.
I need to write a script which sends the output of the commands executed by commands.sh to a file f1.csv.
If the file size exceeds 1 MB, the output should go to file f2.csv.
If the file size exceeds 1 MB again, the output should go to file f3.csv.
If f3.csv exceeds 1 MB, then the old f1 should be deleted, a new f1 should be created, and the output should be written to f1 again. This process should go on.
I can write the crontab file; just the shell script is a bit tricky.
I have been experimenting:
#!/usr/bin/env bash
PREFIX="f"
# Maximum size in bytes after which you want a new file
MAX_SIZE=1048576
# Find the most recent file of the series
LAST_FILE=$(ls "$PREFIX"*.csv 2>/dev/null | tail -1)
# Check if a file exists and, if it does not, create it.
if [[ -z "$LAST_FILE" ]]
then
    LAST_FILE="${PREFIX}1.csv"
    touch "$LAST_FILE"
fi
LAST_FILE_NO=$(echo "$LAST_FILE" | sed "s/$PREFIX//" | sed 's/\.csv//')
LAST_FILE_SIZE=$(stat -c %s "$LAST_FILE")
if [ "$LAST_FILE_SIZE" -lt "$MAX_SIZE" ]
then
    /bin/sh ./sam.sh >> "$LAST_FILE"
else
    UPCOMING_FILE_NO=$((LAST_FILE_NO+1))
    /bin/sh ./sam.sh >> "$PREFIX$UPCOMING_FILE_NO.csv"
fi
Any help is appreciated, guys.
EDIT: I have got the secondary shell script to work too...
Now, if anyone could help me with resetting after 3 files are done and starting again from f1.
Thanks
It sounds like you'd be better off using logrotate, depending on how your script is running. If you are running 'commands.sh' on a cron, you can have logrotate rotate out the logs. There is a good guide on logrotate here:
http://linuxers.org/howto/howto-use-logrotate-manage-log-files
If your commands.sh isn't going to be on a cron, meaning it's not a regular time interval that triggers it, you could manually set up a log rotation at the beginning of your script. I once had to do something similar. I found this guide really useful:
http://wazem.blogspot.com/2013/11/simple-bash-log-rotate-function.html
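If logrotate isn't an option, a rough hand-rolled sketch of the three-file rollover the question describes might look like this (the f1.csv/f2.csv/f3.csv names, the 1 MB limit and commands.sh come from the question; everything else is illustrative and untested):
#!/usr/bin/env bash
# Write commands.sh output to f1.csv, f2.csv and f3.csv in turn,
# wrapping back to f1.csv once f3.csv exceeds 1 MB.
PREFIX="f"
MAX_SIZE=1048576   # 1 MB in bytes

# Pick the most recently written file of the series, defaulting to f1.csv.
current_file=$(ls -t "$PREFIX"[123].csv 2>/dev/null | head -1)
[[ -z "$current_file" ]] && current_file="${PREFIX}1.csv"

# If that file is already too big, move to the next slot (wrapping after 3)
# and start it fresh.
size=$(stat -c %s "$current_file" 2>/dev/null || echo 0)
if (( size >= MAX_SIZE ))
then
    n=${current_file#"$PREFIX"}
    n=${n%.csv}
    n=$(( n % 3 + 1 ))
    current_file="$PREFIX$n.csv"
    rm -f "$current_file"
fi

# Append this run's output; the script itself can be triggered from cron.
./commands.sh >> "$current_file"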
