what is the working of this command ls . | xargs -i -t cp ./{} $1 [closed] - linux

I am a newbie to bash scripting. While studying Advanced Bash Scripting I came across this command. I don't understand how the command works or what the curly braces are used for. Thanks in advance.

Your command:
ls . | xargs -i -t cp ./{} $1
could be divided into the following parts:
ls .
List the current directory (this lists all files and directories except hidden ones)
| xargs -i -t cp ./{} $1
Basically, xargs splits the piped input (the output of ls in this case) and supplies each item as an argument to the following command (cp in this case). The -t option makes xargs print each command to stderr before executing it. The -i option enables string replacement: since no replacement string has been provided, it substitutes {} with the input item. $1 is the destination your files are copied to (it should be a directory for the command to make sense; otherwise you would be copying all the files over the same destination file).
So, for example, if you have a directory containing files called a, b, and c, running this command will perform the following:
cp ./a $1
cp ./b $1
cp ./c $1
NOTE:
The -i option is deprecated; -I (uppercase i) should be used instead. Note that -I requires the replacement string to be named explicitly (e.g. -I {}).
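A minimal sketch of the modern -I spelling under the same assumptions (the /tmp paths are made up for the demo; parsing ls output is fragile with unusual file names and is kept here only to mirror the original command):

```shell
# Set up a throwaway source directory and a destination directory.
mkdir -p /tmp/xargs_src /tmp/xargs_dest
touch /tmp/xargs_src/a /tmp/xargs_src/b /tmp/xargs_src/c
cd /tmp/xargs_src

# -I {} names the replacement token explicitly; -t echoes each cp to stderr.
ls . | xargs -I {} -t cp ./{} /tmp/xargs_dest
```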

Related

How do we use the piped command in Perl? [closed]

This is the command -
$ find /var/opt/ -type f -mtime -1 -print0 | du -s |cut -f1
498172
When I run it from the command line on Linux it prints the size. I want to run the same command from Perl and capture the output in a variable.
I tried this:
my $cmd = "find /var/opt/ -type f -mtime -1 -print0 | du -s |cut -f1";
my @output = `$cmd`;
I am receiving an entirely different output - '\20' instead of 498172.
Can someone help me with what I am missing?
You can also calculate the size in the Perl script without needing to call the external command du:
use feature qw(say);
use strict;
use warnings;
use File::Find;
my $size = 0;
my $dir = '/var/opt';
find(sub {-f $_ && -M _ < 1 && do {$size += -s _ }}, $dir);
say int($size/1024), " KiB";
Note this reports the apparent size, not the disk usage. See How to get the actual directory size (out of du)? for more information.
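To make the apparent-size distinction concrete, a small sketch assuming GNU du (whose -b flag is shorthand for --apparent-size --block-size=1):

```shell
# A 10-byte file usually still occupies a whole filesystem block,
# so its disk usage and its apparent size differ.
printf '0123456789' > /tmp/size_demo.txt
du -b /tmp/size_demo.txt | cut -f1   # apparent size in bytes: 10
du -k /tmp/size_demo.txt | cut -f1   # disk usage in 1 KiB blocks
```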

Create multiple files in multiple directories [closed]

I've got a tree of folders like:
00 -- 0
-- 1
...
-- 9
...
99 -- 0
-- 1
...
-- 9
What is the simplest way to create in every single subfolder a file like:
/00/0/00_0.txt
and save some kind of data to every file?
I tried with touch and with a loop, but without success.
Any ideas how to make it very simple?
List all directories using globs. Modify the listed paths with sed so that 37/4 becomes 37/4/37_4.txt. Use touch to create empty files for all modified paths.
touch $(printf %s\\n */*/ | sed -E 's|(.*)/(.*)/|&\1_\2.txt|')
This works even if 12/3 was just a placeholder and your actual paths are something like abcdef/123. However, it will fail when your paths contain special characters such as whitespace, *, or ?.
To handle arbitrary path names use the following command. It even supports linebreaks in path names.
mapfile -td '' a < <(printf %s\\0 */*/ | sed -Ez 's|(.*)/(.*)/|&\1_\2.txt|')
touch "${a[@]}"
You may use find and then run commands using -exec
find . -mindepth 2 -maxdepth 2 -type d -exec bash -c 'f=${1#./}; touch "$f/${f%/*}_${f##*/}.txt"' _ {} \;
Passing {} to bash as a positional parameter ($1) avoids injecting the path into the script text, and the parameter expansions ${f%/*} and ${f##*/} split the path at the last /, effectively replacing it with _ (so 37/4 becomes 37/4/37_4.txt).
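An end-to-end sketch of the whole task on a reduced tree (the /tmp path is made up for the demo; it uses the same glob-and-expansion idea as the answers above):

```shell
# Build a small sample tree: 00/0 .. 01/2
base=/tmp/tree_demo
for outer in 00 01; do
    for inner in 0 1 2; do
        mkdir -p "$base/$outer/$inner"
    done
done

# In each leaf directory, create <outer>/<inner>/<outer>_<inner>.txt with some data.
( cd "$base" &&
  for d in */*/; do
      d=${d%/}                                           # e.g. "00/0"
      printf 'some data\n' > "$d/${d%/*}_${d##*/}.txt"   # e.g. 00/0/00_0.txt
  done )
```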

How do I create a bash file that creates a symbolic link (linux) for each file moved from Folder A to Folder B [closed]

How do I create a bash file that creates a symbolic link (linux) for each file moved from Folder A to Folder B.
However, this would be done selecting the 150 biggest current files from Folder A.
You could probably write it as a one-liner, but it's easier in a simple bash script like:
#!/bin/bash
FolderA="/some/folder/to/copy/from/"
FolderB="/some/folder/to/copy/to/"
while read -r size file; do
    mv -iv "$file" "$FolderB"
    ln -s "${FolderB}${file##*/}" "$file"
done < <(find "$FolderA" -maxdepth 1 -type f -printf '%s %p\n' | sort -rn | head -n150)
Note ${file##*/} removes everything up to and including the last /, per the bash manual:
${parameter##word}
Remove matching prefix pattern. The word is expanded to produce a pattern just as in pathname expansion. If the pattern matches the beginning of the value of parameter, then the result of the expansion is the expanded value of parameter with the shortest matching pattern (the "#" case) or the longest matching pattern (the "##" case) deleted. If parameter is @ or *, the pattern removal operation is applied to each positional parameter in turn, and the expansion is the resultant list. If parameter is an array variable subscripted with @ or *, the pattern removal operation is applied to each member of the array in turn, and the expansion is the resultant list.
Also, it may seem like a good idea to just do for file in $(command), but process substitution with while/read works better in general, avoiding word-splitting issues such as file names containing spaces.
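A minimal sketch of the word-splitting issue that note warns about, using a made-up /tmp path (requires bash for the process substitution):

```shell
mkdir -p /tmp/ws_demo
touch "/tmp/ws_demo/a file with spaces"

# Broken: the unquoted $( ) result is word-split, so one file looks like four.
count=0
for f in $(find /tmp/ws_demo -type f); do
    count=$((count + 1))
done

# Safe: while/read with process substitution sees one line per file.
safe=0
while IFS= read -r f; do
    safe=$((safe + 1))
done < <(find /tmp/ws_demo -type f)
```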
As with any task, break it up into smaller pieces, and things will fall into place.
Select the biggest 150 files from FolderA
This can be done with du, sort, head, and awk, the output of which you stuff into an array (note the reverse sort, sort -rh, so the biggest files come first; this simple version breaks on file names containing whitespace):
myarray=( $(du -h /path/to/FolderA/* | sort -rh | head -n150 | awk '{print $2}') )
Move files from FolderA into FolderB
Take the list from the last command, and iterate through it:
for file in "${myarray[@]}"; do mv "$file" /path/to/FolderB/; done
Make a symlink to the new location
Again, just iterate through the list:
for file in "${myarray[@]}"; do ln -s "/path/to/FolderB/${file/*FolderA\/}" "$file"; done

How to color ls - l command's columns [closed]

I wonder if it's possible to have ls -l colored. I'm not talking about --color, of course.
I found a useful alias for displaying octal permissions in an ls -l listing. Now, is it possible to color it? Along the same lines, is it possible, when I do ls -l, to display only the permissions in red or something?
I don't know how to use color codes, but grep has a --color option.
If the first line of ls -l is not important to you, you can consider using grep
ls -l | grep --color=always '[d-][r-][w-][x-][r-][w-][x-][r-][w-][x-]'
or in shorter form:
ls -l | grep --color=always '[d-]\([r-][w-][x-]\)\{3\}'
You can use several utilities to do it, like piping the output of ls (with options) to supercat (after defining the rules), or to highlight (after defining the rules).
Or use awk/sed to pretty print based on regexes. E.g. with gensub in awk, you can insert ANSI color codes to the output...
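As a concrete instance of the awk idea, a sketch that wraps the permissions column (field 1) in ANSI red (the escape codes are standard; this uses plain field assignment rather than gensub, so it works in any awk):

```shell
# Recolor the first column of ls -l output red; assigning to $1 makes awk
# rebuild the line, and "1" is the always-true pattern that prints it.
ls -l | awk '{ $1 = "\033[31m" $1 "\033[0m" } 1'
```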
The first thing that came into my mind is that you can use --color=auto for this:
ls -l --color=auto
And it can be handy to create an alias:
alias lls='ls -l --color=auto'
However, I see you don't want that. For that we have to create a more complex function that uses echo -e with color codes:
print_line () {
    red='\e[0;31m'
    endColor='\e[0m'
    first=${1%% *}
    rest=${1#* }
    echo -e "${red}$first${endColor} $rest"
}
lls () {
    local IFS=$'\n'
    while read -r line; do
        print_line "$line"
    done <<< "$(find "$1" -maxdepth 1 -printf '%M %p\n')"
}
If you store them in ~/.bashrc and source it (. ~/.bashrc) then whenever you do lls /some/path it will execute these functions.
If you're asking whether there is an option to specify custom column-specific colors in ls, I don't think so. But you can do something like:
> red() { red='\e[0;31m'; echo -ne "${red}$1 "; tput sgr0; echo "${*:2}"; }
> while read -r line; do red $line; done < <(ls -l)

how to delete a line that contains a word in all text files of a folder? [closed]

So, in linux, I have a folder with lots of big text files.
I want to delete all the lines of these files that contain a specific keyword.
Is there any easy way to do that across all files?
There are already many similar answers. I'd like to add that if you want to match this is a line containing a keyword but not this is a line containing someoneelseskeyword, then you had better add word-boundary anchors around the word:
sed -i '/\<keyword\>/d' *.txt
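A quick sketch of the word-boundary behavior, assuming GNU sed (which supports \< and \> and the -i flag); the /tmp path is made up for the demo:

```shell
printf 'this line has keyword\nthis has someoneelseskeyword\n' > /tmp/kw_demo.txt
# \<keyword\> matches "keyword" only as a whole word, so the second line survives.
sed -i '/\<keyword\>/d' /tmp/kw_demo.txt
cat /tmp/kw_demo.txt
```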
I cannot test this right now, but it should get you started
find /path/to/folder -type f -exec sed -i '/foo/d' {} ';'
find files in the directory /path/to/folder
find lines in these files containing foo
delete those lines from those files
Sure:
for x in files*
do
    grep -v your_pattern "$x" > "$x.tmp" && mv "$x.tmp" "$x"
done
try this:
find /path/to/folder -type f | xargs sed -i '/key_word/d'
sed -i '/keyword/d' *.txt (run this in your directory)
sed: the stream editor, used here to delete lines in the individual files
-i option: makes the changes permanent in the input files (edits in place)
'/keyword/': specifies the pattern or key to be searched for in the files
d option: tells sed that matching lines need to be deleted
*.txt: tells sed to use all the text files in the directory as input for processing; you can specify an individual file, or an extension glob like *.txt the way I did.
