Grep command in Linux

I need a little help writing a single-line grep command to get the first 15 files, along with the size of each file in MB, sorted by last-modified timestamp.
If I execute the command below:
grep -il "SampleString" *.log | xargs -t ls -ltr
Result:
-rw-r--r-- 1 Text1 Text2 5432278 27 mar 15:22 SampleFile.log
My Result:
grep -il "SampleString" *.log | xargs -t ls -ltr | tr -s ' ' | cut -d' ' -f5-9 | tail -r | head -15
5432278 27 mar 13:44 SampleFile.log
Required Output:
27 mar 13:44 SampleFile.log 5MB
OR
5MB 27 mar 13:44 SampleFile.log
Please post your comments.

You can try
ls -l -h -t | head -n 5
-t sorts by modification time, newest first
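Combining those `ls` flags with the original grep filter gives one possible one-liner. A sketch, assuming GNU `ls` (`--block-size=M` prints sizes in whole megabytes); the sample files below are made up for the demo:

```shell
# Create a throwaway directory with demo log files
tmp=$(mktemp -d)
cd "$tmp"
echo "SampleString here" > match1.log
echo "nothing" > nomatch.log
echo "SampleString too" > match2.log
# -Z / -0 keep filenames with spaces intact; -t sorts newest first;
# --block-size=M shows sizes in MB (GNU ls)
grep -ilZ "SampleString" *.log | xargs -0 ls -lt --block-size=M | head -n 15
```

Since `ls` is given explicit file operands, only the matching files appear, newest first.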

Related

Get clean list of file sizes and names using SFTP in unix

I want to fetch a list of files from a server using SFTP, one by one, only if their size is less than 1 GB.
I am running the following command:
$sftp -oIdentityFile=/home/user/.ssh/id_rsa -oPort=22 user@hostname >list.txt <<EOF
cd upload/Example
ls -l iurygify*.zip
EOF
This results in:
$cat list.txt
sftp> cd upload/Example
sftp> ls -l iurygify*.zip
-rwxrwx--- 0 300096661 300026669 0 Mar 11 16:38 iurygify1.zip
-rwxrwx--- 0 300096661 300026669 0 Mar 11 16:38 iurygify2.zip
I could then use awk to get the size and filename, which I can save into logs for reference, and then download only those files which meet the 1 GB criterion.
Is there any simpler approach to getting this file list with sizes? I want to avoid the junk entries from the prompt and commands in list.txt, and I do not want to do this via the expect command.
We are using SSH key authentication.
You could place your sftp commands in a batch file and filter the output - no need for expect.
echo 'ls -l' > t
sftp -b t -oIdentityFile=/home/user/.ssh/id_rsa -oPort=22 user@hostname | grep -v 'sftp>' >list.txt
Or take it a step further and filter out the "right size" in the same step:
sftp -b t -oIdentityFile=/home/user/.ssh/id_rsa -oPort=22 user@hostname | awk '$1!~/sftp>/&&$5<1000000000' >list.txt
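To see how that awk filter behaves without a live server, it can be tried against captured sample output (the sizes and filenames below are made up; field 5 of `ls -l` output is the size in bytes):

```shell
# Simulated sftp transcript: one prompt line, one small file, one >1 GB file
cat > list_raw.txt <<'EOF'
sftp> ls -l
-rwxrwx--- 0 300096661 300026669 536870912 Mar 11 16:38 small.zip
-rwxrwx--- 0 300096661 300026669 2147483648 Mar 11 16:38 big.zip
EOF
# Drop the prompt line and keep only entries under 1 GB,
# printing "size filename" for the log
awk '$1!~/sftp>/ && $5<1000000000 {print $5, $NF}' list_raw.txt
# prints: 536870912 small.zip
```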
Maybe use lftp instead of sftp?
$ lftp sftp://xxx > list.txt <<EOF
> open
> ls -l
> EOF
$ cat list.txt
drwxr-xr-x 10 ludo users 4096 May 24 2019 .
drwxr-xr-x 8 root root 4096 Dec 20 2018 ..
-rw------- 1 ludo users 36653 Mar 31 19:28 .bash_history
-rw-r--r-- 1 ludo users 220 Mar 21 2014 .bash_logout
-rw-r--r-- 1 ludo users 362 Aug 16 2018 .bash_profile
...

Linux - multiple command execution using semicolon

I have a scenario where I need to execute the date command and the ls -lrth | wc -l command at the same time.
I read somewhere on Google that I can do it as shown below, using a semicolon:
ls -lrth | wc -l | ; date
This works super fine!
But the problem is when I want to extract the output of this. It gives a two-line output, with the output of ls -lrth | wc -l on the first line and the date output on the second, as shown below:
$ cat test.txt
39
Mon Oct 26 16:11:20 IST 2015
But it seems like Linux is treating these two lines as if they were on the same line.
I want this to be formatted to something like this
39,Mon Oct 26 16:11:20 IST 2015
I am not able to access these two lines separately (not even with tail or head).
Thanks in advance.
EDIT
Why do I think Linux is treating this as the same line? Because when I do this:
$ ls -lrth| wc -l;date | head -1
39
Mon Oct 26 16:24:07 IST 2015
That is the basis for my one-line assumption.
Have you already tried using an echo?
echo $(ls | wc -l) , $(date)
(or something similar, I don't have a Linux emulator here)
If you want it in a script:
./script.sh
#!/bin/bash
a=$(ls -lrth | wc -l)
b=$(date)
out="$a,$b"
echo "$out"
EDIT
ls -lrth| wc -l;date | head -1
The semicolon (";") simply separates two different commands.
Pipe to xargs echo -n (-n means no newline at end):
ls -lrth | wc -l | xargs echo -n ; echo -n ","; date
Testing:
$ ls -lrth | wc -l | xargs echo -n ; echo -n ","; date
11,Mon Oct 26 12:57:14 EET 2015
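A printf-based variant of the same idea is also possible. A sketch: command substitution strips trailing newlines, so `printf` can join the two results with a comma in a single step (`tr -d ' '` removes the leading padding some `wc` implementations emit):

```shell
# Join the file count and the date on one comma-separated line
printf '%s,%s\n' "$(ls | wc -l | tr -d ' ')" "$(date)"
```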

How do I find the latest date folder in a directory and then construct the command in a shell script?

I have a directory which will contain some folders in date format (YYYYMMDD), as shown below:
david#machineX:/database/batch/snapshot$ ls -lt
drwxr-xr-x 2 app kyte 86016 Oct 25 05:19 20141023
drwxr-xr-x 2 app kyte 73728 Oct 18 00:21 20141016
drwxr-xr-x 2 app kyte 73728 Oct 9 22:23 20141009
drwxr-xr-x 2 app kyte 81920 Oct 4 03:11 20141002
Now I need to extract the latest date folder from the /database/batch/snapshot directory and then construct the command in my shell script like this:
./file_checker --directory /database/batch/snapshot/20141023/ --regex ".*.data" > shardfile_20141023.log
Below is my shell script -
#!/bin/bash
./file_checker --directory /database/batch/snapshot/20141023/ --regex ".*.data" > shardfile_20141023.log
# now I need to grep shardfile_20141023.log after above command is executed
How do I find the latest date folder and construct above command in a shell script?
This is one approach: just grep for entries that have 8 digits:
ls -t1 | grep -P -e "\d{8}" | head -1
Or
ls -t1 | grep -E -e "[0-9]{8}" | head -1
You could try the following in your script:
pushd /database/batch/snapshot
LATESTDATE=`ls -d * | sort -n | tail -1`
popd
./file_checker --directory /database/batch/snapshot/${LATESTDATE}/ --regex ".*.data" > shardfile_${LATESTDATE}.log
See BashFAQ#099 aka "How can I get the newest (or oldest) file from a directory?".
That being said, if you don't care for actual modification time and just want to find the most recent directory based on name you can use an array and globbing (note: the sort order with globbing is subject to LC_COLLATE):
$ find
.
./20141002
./20141009
./20141016
./20141023
$ foo=( * )
$ echo "${foo[${#foo[@]}-1]}"
20141023
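A runnable sketch of the same globbing idea in plain POSIX shell (the `demo` directory and its dates are made up for the demo). It relies on globs expanding in lexical order, which matches chronological order for YYYYMMDD names:

```shell
# Demo data: three date-named directories
mkdir -p demo/20141002 demo/20141009 demo/20141023
cd demo
# The glob expands in lexical (= chronological) order, so the loop
# variable ends up holding the newest directory name
for d in [0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]/; do
  latest=${d%/}
done
echo "$latest"   # 20141023
```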

How do I count how many files running in a Linux directory?

I have to list the files running in the current directory and display the count of those listed files.
[root@xxxx ~]# ps -eaf | grep perl
root 16278 16196 48 10:38 pts/1 00:40:19 perl filename.pl
root 16379 16293 0 12:02 pts/0 00:00:00 grep perl
[root@xxxx ~]# ps -AF | grep -i "/var/www/anand/file/sample" wc -l
1
[root@xxxx ~]#
There are 2 files running in the same directory "sample"; I have to count the number of files. The above command doesn't work, so please provide a solution.
$ ls | wc -l
Or when you need only regular files:
$ ls -l | grep ^- | wc -l
When you need the number of files that were started from the directory, say /home/user,
you must use something like:
$ ps aux | grep /[h]ome/user | wc -l
Note the [] characters, which you can place around any letter in the name; this keeps the grep process itself out of the matches.
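As an alternative to parsing `ls` output (which breaks on filenames containing newlines), a `find`-based sketch for counting the regular files in a directory; the demo files here are made up:

```shell
# Demo data in a throwaway directory
tmp=$(mktemp -d)
cd "$tmp"
touch file1 file2 file3
mkdir subdir
# -maxdepth 1 stays in the current directory; -type f counts only
# regular files, so subdir is excluded
find . -maxdepth 1 -type f | wc -l
```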
ps -AF | grep -i "/usr/local/" | wc -l
where "/usr/local/" is the directory you are interested in.

How to check syslog in Bash on Linux?

In C we log this way:
syslog( LOG_INFO, "proxying %s", url );
In Linux, how can we check the log?
How about less /var/log/syslog?
On Fedora 19, it looks like the answer is /var/log/messages. Although check /etc/rsyslog.conf if it has been changed.
By default it's logged into system log at /var/log/syslog, so it can be read by:
tail -f /var/log/syslog
If the file doesn't exist, check /etc/syslog.conf to find the configuration file for syslogd.
Note that the configuration file could be different, so check the running process to see whether it is using a different file:
# ps wuax | grep syslog
root /sbin/syslogd -f /etc/syslog-knoppix.conf
Note: In some distributions (such as Knoppix) all logged messages could be sent into different terminal (e.g. /dev/tty12), so to access e.g. tty12 try pressing Control+Alt+F12.
You can also use lsof tool to find out which log file the syslogd process is using, e.g.
sudo lsof -p $(pgrep syslog) | grep log$
To send the test message to syslogd in shell, you may try:
echo test | logger
For troubleshooting use a trace tool (strace on Linux, dtruss on Unix), e.g.:
sudo strace -fp $(cat /var/run/syslogd.pid)
A very cool util is journalctl.
For example, to show syslog output on the console: journalctl -t <syslog-ident>, where <syslog-ident> is the identity you gave to the openlog function when initializing syslog.
tail -f /var/log/syslog | grep process_name
where process_name is the name of the process we are interested in.
If you like Vim, it has built-in syntax highlighting for the syslog file, e.g. it will highlight error messages in red.
vi +'syntax on' /var/log/syslog
On some Linux systems (e.g. Debian and Ubuntu) syslog is rotated daily and you have multiple log files where two newest files are uncompressed while older ones are compressed:
$ ls -l /var/log/syslog*
-rw-r----- 1 root adm 888238 Aug 25 12:02 /var/log/syslog
-rw-r----- 1 root adm 1438588 Aug 25 00:05 /var/log/syslog.1
-rw-r----- 1 root adm 95161 Aug 24 00:07 /var/log/syslog.2.gz
-rw-r----- 1 root adm 103829 Aug 23 00:08 /var/log/syslog.3.gz
-rw-r----- 1 root adm 82679 Aug 22 00:06 /var/log/syslog.4.gz
-rw-r----- 1 root adm 270313 Aug 21 00:10 /var/log/syslog.5.gz
-rw-r----- 1 root adm 110724 Aug 20 00:09 /var/log/syslog.6.gz
-rw-r----- 1 root adm 178880 Aug 19 00:08 /var/log/syslog.7.gz
To search all the syslog files you can use the following commands:
$ sudo zcat -f `ls -tr /var/log/syslog*` | grep -i error | less
where zcat first decompresses and prints all the syslog files (oldest first), grep performs the search, and less pages the results.
To do the same but with the lines prefixed with the name of the syslog file you can use zgrep:
$ sudo zgrep -i error `ls -tr /var/log/syslog*` | less
$ zgrep -V | grep zgrep
zgrep (gzip) 1.6
In both cases sudo is required if syslog files are not readable by ordinary users.
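The `zcat -f` behaviour above can be tried on synthetic files (the names and contents below are made up; `-f` passes uncompressed files through unchanged while decompressing `.gz` ones):

```shell
tmp=$(mktemp -d)
cd "$tmp"
echo "old error line" > syslog.2
gzip syslog.2                    # becomes syslog.2.gz
echo "new error line" > syslog
# zcat -f prints both the compressed and the plain file
zcat -f syslog.2.gz syslog | grep -i error
```

Both the old and the new line come out of the pipeline, just as with the rotated real logs.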
