How to move n files from inside a directory called directory1 to a new directory - linux

I have a list of files inside directory1 and I want to move n of those files from directory1 to directory2. When I try xargs like this, it does not work:
ls -ltr | head -20 | mv xargs /directory2
Why can't we use xargs in the middle of a pipe? How do I move n files to another directory on the command line?

First, you should not use ls in scripts; ls is for humans, not for scripting. Second, the last command of your pipe tries to move a file named xargs to /directory2. Third, xargs appends its input to the end of the command. Even if you swap mv and xargs, this would execute mv /directory2 file1 file2 file3... file20 instead of what you want: mv file1 file2 file3... file20 /directory2. Finally, the -l option of ls prints more than just the file name (permissions, owner, group...), so its output cannot be used as mv arguments.
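For illustration, one GNU-specific way around the directory-last problem is mv -t, which takes the target directory up front so xargs can append the file names at the end. A rough sketch only: it drops the -l, assumes GNU mv/xargs, and still breaks on file names containing newlines:
ls -tr | head -20 | xargs -d '\n' mv -t /directory2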
Your ls options suggest that you want to move the 20 oldest files. Try:
while read -d '' -r time file; do
    mv -- "$file" "/directory2"
done < <( stat --printf '%Y %n\0' * | sort -zn | head -zn20 )
stat --printf '%Y %n\0' * prints the last modification time as seconds since epoch, followed by the file name, of all files in the current directory. Each record is terminated by the NUL character, the only character that cannot be found in a file name. The -z option of sort and head and the -d '' option of read instruct these utilities to use NUL as the record separator instead of the default (newline). This way, the script should work even if some of your file names contain newlines.
If you prefer xargs:
stat --printf '%Y\t%n\0' * | sort -zn | head -zn20 | cut -zf2- |
xargs -0I{} mv -- {} /directory2
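If you want to sanity-check what that NUL-delimited stream looks like before piping it further, one way (for inspection only) is to translate the NULs back into newlines:
stat --printf '%Y\t%n\0' * | sort -zn | tr '\0' '\n' | head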

You can try this:
n=20
cd /path/to/directory1 || exit
files=(*)
mv -- "${files[#]:0:n}" /path/to/directory2
Also, you may consider reading this article: Why you shouldn't parse the output of ls

Related

Grep regular files in a linux File System and show their content

How do I display the content of the regular files matched with the grep command? For example, I grep a directory in order to see the regular files it has. I used the next line to see the regular files only:
ls -lR | grep ^-
Then I would like to display the content of the files found there. How do I do it?
I would do something like:
$ cat `ls -lR | egrep "^-" | rev | cut -d ' ' -f 1 | rev`
Use ls to find the files
grep finds your pattern
reverse the whole result
cut out the first space-separated field (of the reversed line) to get the file name (files with spaces are problematic)
reverse the file name back to normal direction
Backticks will execute that and return the list of file names to cat.
or the way I would probably do it is use vim to look at each file.
$ vim `ls -lR | egrep "^-" | rev | cut -d ' ' -f 1 | rev`
It feels like you are trying to find only the files recursively. This is what I do in those cases:
$ vim `find . -type f -print`
There are multiple ways of doing it. I'll try to give you a few easy and clean ways here. All of them handle filenames with spaces.
$ find . -type f -print0 | xargs -0 cat
-print0 adds a null character '\0' as the delimiter, and you need to call xargs -0 to recognise the null delimiter. If you don't do that, whitespace in the filenames creates problems.
e.g. without -print0, the filenames abc 123.txt and 1.inc would be read as three separate files: abc, 123.txt and 1.inc.
With -print0 this becomes abc 123.txt'\0' and 1.inc'\0', which is read as abc 123.txt and 1.inc.
As for xargs, it accepts its input as parameters: command1 | xargs command2 means the output of command1 is passed to command2 as arguments.
cat displays the content of the file.
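As a quick illustration of the whitespace problem described above, using a made-up file name with a space and printf as the command run by xargs:
printf 'abc 123.txt\n' | xargs printf '[%s]\n'      # two arguments: [abc] then [123.txt]
printf 'abc 123.txt\0' | xargs -0 printf '[%s]\n'   # one argument: [abc 123.txt]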
$ find . -type f -exec echo {} \; -exec cat {} \;
This is just using the find command. It finds all the files (type f), calls echo to output the filename, then calls cat to display its content.
If you don't want the filename, omit -exec echo {} \;
Alternatively you can use cat command and pass the output of find.
$ cat `find . -type f -print`
If you want to scroll through the content of multiple files one by one, you can use:
$ less `find . -type f -print`
When using less, you can navigate between files with :n and :p (next and previous file respectively). Press q to quit less.

Grep inside files returned from ls and head

I have a directory with a large number of files. I am attempting to search for text located in at least one of the files. The text is likely located in one of the more recent files. What is the command to do this? I thought it would look something like ls -t | head -5 | grep abaaba.
For example, if ls -t | head -5 returns file1, file2, file3, file4 and file5, I need to know which of those files contains abaaba.
It's not really clear what you are trying to do, but I assume efficiency is your main goal. I would use something like:
ls -t | while read -r f; do grep -lF abaaba "$f" && break;done
This will print only the first file containing the string and stop the search. If you want to see the actual matching lines, use -H instead of -l. And if you have a regex instead of a plain string, drop -F (which will make grep run slower, however).
ls -t | while read -r f; do grep -H abaaba "$f" && break;done
Of course if you want to continue the search I'd suggest dropping "&& break".
ls -t | while read -r f; do grep -HF abaaba "$f";done
If you have some idea about the time frame, it's a good idea to try find.
find . -maxdepth 1 -type f -mtime -2 -exec grep -HF abaaba {} \;
You can raise the number after -mtime to cover more than last 2 days.
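For example, to cover roughly the last week instead, only the -mtime value changes:
find . -maxdepth 1 -type f -mtime -7 -exec grep -HF abaaba {} \;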
If you're just doing this interactively, and you know you don't have spaces in your filenames, then you can do:
grep abaaba $(ls -t | head -5) # DO NOT USE THIS IN A SCRIPT
If writing this in an alias or for repeat future use, do it the "proper" way that takes more typing, but that doesn't break on spaces and other things in filenames.
If you have spaces but not newlines, you can also do
(IFS=$'\n'; grep abaaba $(ls -t | head -5) )
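One sketch of the "proper", more-typing variant mentioned above, reusing the NUL-delimited approach from the first question (assumes GNU find, sort, head, cut and xargs):
find . -maxdepth 1 -type f -printf '%T@\t%p\0' |
    sort -zrn | head -zn5 | cut -zf2- |
    xargs -0 grep -HF abaaba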

How to copy the top 10 most recent files from one directory to another?

All my html files reside here:
/home/thinkcode/myfiles/html/
I want to move the newest 10 files to /home/thinkcode/Test
I have this so far. Please correct me. I am looking for a one-liner!
ls -lt *.htm | head -10 | awk '{print "cp "$1" "..\Test\$1}' | sh
ls -lt *.htm | head -10 | awk '{print "cp " $9 " ../Test/"$9}' | sh
The shell expands back-ticked commands before cp ever runs, so you could use a command like this one to copy the 10 latest files to another folder, e.g. /test:
cp `ls -t *.htm | head -10` /test
Here is a version which doesn't use ls. It should be less vulnerable to strange characters in file names:
find . -maxdepth 1 -type f -name '*.html' -print0 |
    xargs -0 stat --printf "%Y\t%n\n" |
    sort -n |
    tail -n 10 |
    cut -f 2 |
    xargs cp -t ../Test/
I used find for a couple of reasons:
1) if there are too many files in a directory, bash will balk at the wildcard expansion*.
2) Using the -print0 argument to find gets around the problem of bash expanding whitespace in a filename in to multiple tokens.
* Strictly speaking, the limit is the kernel's ARG_MAX, which caps the combined size of the argument list and the environment passed to an external command. So it's not strictly a function of the number of file names, but rather the total length of the expanded file names plus the environment variables. A huge environment => less room for the expanded wildcard.
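On most Linux systems you can inspect that limit with getconf (the value varies between systems):
getconf ARG_MAX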
EDIT: Incorporated some of @glennjackman's improvements. Kept the initial use of find to avoid the use of the wildcard expansion, which might fail in a large directory.
ls -lt *.html | head -10 | awk '{print $NF}' | xargs -i cp {} DestDir
In the above example DestDir is the destination directory for the copy.
Add -t after xargs to see the commands as they execute. I.e., xargs -i -t cp {} DestDir.
For more information check out the xargs command.
EDIT: As pointed out by @DennisWilliamson (and also confirmed by the current man page) regarding the -i option: "This option is deprecated; use -I instead."
Also, both solutions presented depend on the filenames in question not containing any blanks or tabs.
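If blanks or tabs in the names are a concern, a NUL-safe sketch along the lines of the find-based answer above might look like this (assumes GNU tools; DestDir is the same hypothetical destination as before):
find . -maxdepth 1 -type f -name '*.html' -printf '%T@\t%p\0' |
    sort -zrn | head -zn10 | cut -zf2- |
    xargs -0 cp -t DestDir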

Execute command for every file in the current dir

How can I execute a certain command for every file/folder in the current folder?
I've started with this as a base script, but it seems to work only by using temporary files, and I don't really like the idea. Is there any other way?
FOLDER=".";
DIRS=`ls -1 "$FOLDER">/tmp/DIRS`;
echo >"/tmp/DIRS1";
while read line ; do
    SIZE=`du "$FOLDER$line"`;
    echo $SIZE>>"/tmp/DIRS1";
done < "/tmp/DIRS";
For anyone interested, I wanted to make a list of folders, sorted by their size. Here is the final result:
FOLDER="$1";
for f in $FOLDER/*; do
du -sb "$f";
done | sort -n | sed "s#^[0-9]*##" | sed "s#^[^\./]*##" | xargs -L 1 du -sh | sed "s|$FOLDER||";
which leads to du -sb $FOLDER/* | sort -n | sed "s#^[0-9]*##" | sed "s#^[^\./]*##" | xargs -L 1 du -sh | sed "s|$FOLDER||";
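For what it's worth, a much shorter sketch with the same goal (folders under $FOLDER sorted by size, shown human-readable) relies on GNU sort's -h option:
du -sh "$FOLDER"/*/ | sort -h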
Perhaps xargs, which appends the parameters it reads on stdin to the command specified after it, re-invoking that command as often as needed...
ls -1 $FOLDER | xargs du
But, in this case, why not...
du *
...? Or...
for X in *; do
    du "$X"
done
(Personally, I use zsh, where you can modify the glob pattern to match only, say, regular files, or only directories, or only symlinks, etc. I'm pretty sure there's something similar in bash; I can dig for details if you need that.)
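For reference, the zsh feature mentioned is glob qualifiers, e.g. *(.) for regular files only and *(/) for directories only; the closest everyday bash equivalent is a trailing slash, which restricts the glob to directories:
for X in */; do
    du "$X"
done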
Am I missing part of your requirement?
The find command will let you execute a command for each item it finds, too. Without further arguments it will find all files and folders in the current directory, like this:
$ find -exec du -h {} \;
The {} part is the "variable" where the match is placed, here as the argument to du. \; ends the command.
It is useless to parse the output of ls to cycle over files; Bash can do it with wildcard expansion.
Storing the result of du in a variable to output it to a file is also a useless use of a variable.
What I suggest:
for i in ./tmp/DIRS/*
do
    du "$i" >> "/tmp/DIRS1"
done
What's wrong with something like this?
function process() {
    echo "Processing $1"
}
for i in *
do
    process "$i"
done
You can put all the "work" you want done inside the function process. This will do it for your current directory.
This works for every file in the current directory:
for file in *
do
    /usr/local/mp3unicode/bin/mp3unicode -s cp1251 --id3v2-encoding unicode "$file"
done
The exec action can be invoked in two ways:
find . -type d -exec du -ch {} \;
find . -type d -exec du -ch {} +
In the first command, the substitution of {} occurs (and du runs) once for each folder found. In the second, all the results of find are passed to the command at once, which matters here: it lets du -c produce a final total.
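To see the difference in how many times the command gets invoked, echo can stand in for du in a small sketch:
find . -type d -exec echo "one invocation per directory:" {} \;
find . -type d -exec echo "one invocation for all directories:" {} +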
https://www.eovao.com/en/a/bash%20find%20exec%20linux/2/bash-execute-action-on-find-(-exec)-for-each-file

Copy the three newest files under one directory (recursively) to another specified directory

I'm using bash.
Suppose I have a log file directory /var/myprogram/logs/.
Under this directory I have many sub-directories and sub-sub-directories that include different types of log files from my program.
I'd like to find the three newest files (modified most recently), whose name starts with 2010, under /var/myprogram/logs/, regardless of sub-directory and copy them to my home directory.
Here's what I would do manually
1. Go through each directory and do ls -lt 2010* to see which files starting with 2010 were modified most recently.
2. Once I've gone through all directories, I'd know which three files are the newest, so I copy them manually to my home directory.
This is pretty tedious, so I wondered if maybe I could somehow pipe some commands together to do this in one step, preferably without using shell scripts?
I've been looking into find, ls, head, and awk that I might be able to use but haven't figured the right way to glue them together.
Let me know if I need to clarify. Thanks.
Here's how you can do it:
find -type f -name '2010*' -printf "%C@\t%P\n" | sort -r -k1,1 | head -3 | cut -f 2-
This outputs a list of files prefixed by their last change time, sorts them based on that value, takes the top 3 and removes the timestamp.
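Note that %C@ is the last status-change time; since the question asks for the most recently modified files, a variant using %T@ (modification time, also a GNU find format) may match the intent more closely. The resulting list can then be fed to cp as in the other answers:
find -type f -name '2010*' -printf "%T@\t%P\n" | sort -rn -k1,1 | head -3 | cut -f 2-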
Your answers feel very complicated, how about
for FILE in `find . -type d`; do ls -t -1 -F $FILE | grep -v "/" | head -n3 | xargs -I{} mv {} ..; done;
or laid out nicely
for FILE in `find . -type d`;
do
    ls -t -1 -F $FILE | grep -v "/" | grep "^2010" | head -n3 | xargs -I{} mv {} ~;
done;
My "shortest" answer after quickly hacking it up.
for file in $(find . -iname '*.php' -mtime 1 | xargs ls -l | awk '{ print $6" "$7" "$8" "$9 }' | sort | sed -n '1,3p' | awk '{ print $4 }'); do cp $file ../; done
The main command stored in $() does the following:
Find all files recursively in the current directory matching (case-insensitively) the name *.php, with a modification time of about a day ago (-mtime 1 matches files modified between 24 and 48 hours ago; use -mtime -1 for "within the last 24 hours").
Pipe to ls -l, required to be able to sort by modification date, so we can have the first three
Extract the modification date and file name/path with awk
Sort these files based on datetime
With sed print only the first 3 files
With awk print only their name/path
Used in a for loop and as action copy them to the desired location.
Or use @Hasturkun's variant, which popped up as a response while I was editing this post :)
