I have several files: a.dat, a.txt, a.mp3, b.dat, b.txt, b.mp3, b.zip, b.rar, c.mp3 and so on. I want to rename all files with basename "a" to basename "x", so that the files become x.dat, x.txt, x.mp3, b.dat, b.txt, b.mp3, b.zip, b.rar, c.mp3 and so on.
In Linux this can be done via the terminal, but it requires a lot of typing. I want a script to do the task for me.
You don't need a script when you have the rename (or prename on some systems) command.
It allows groups of files to be renamed using arbitrarily complex Perl regular expressions:
pax> ll qq*
-rwxr-xr-x 1 pax pax 4574 Apr 13 17:03 qq
-rw-r--r-- 1 pax pax 213 Apr 13 17:03 qq.c
-rw-r--r-- 1 pax pax 804 Apr 6 12:23 qq.cpp
-rw-r--r-- 1 pax pax 258 Apr 5 21:33 qq.m
-rw-r--r-- 1 pax pax 904 Apr 6 10:35 qq.o
-rw-r--r-- 1 pax pax 241 Apr 6 10:50 qq.py
-rw-r--r-- 1 pax pax 769 Apr 7 09:47 qq.txt
pax> rename 's/qq/xyzzy/' qq*
pax> ll qq*
ls: cannot access qq*: No such file or directory
pax> ll xyzzy*
-rwxr-xr-x 1 pax pax 4574 Apr 13 17:03 xyzzy
-rw-r--r-- 1 pax pax 213 Apr 13 17:03 xyzzy.c
-rw-r--r-- 1 pax pax 804 Apr 6 12:23 xyzzy.cpp
-rw-r--r-- 1 pax pax 258 Apr 5 21:33 xyzzy.m
-rw-r--r-- 1 pax pax 904 Apr 6 10:35 xyzzy.o
-rw-r--r-- 1 pax pax 241 Apr 6 10:50 xyzzy.py
-rw-r--r-- 1 pax pax 769 Apr 7 09:47 xyzzy.txt
There is a small program called mmv which does the job:
$ touch a.dat a.txt a.mp3 b.dat b.txt b.mp3 b.zip b.rar c.mp3
$ mmv "a.*" "x.#1"
$ ls
b.dat b.mp3 b.rar b.txt b.zip c.mp3 x.dat x.mp3 x.txt
mmv is available in the package repositories of most Linux distributions.
I'll suggest a longer route that should work; consider it the algorithm rather than a finished script.
First, list the candidate names in the folder by combining the ls and grep commands:
ls | grep ^a lists all the files whose names start with a. Use a stricter regular expression here if you need only files whose basename is exactly a.
Read the file names one by one with a while loop.
Store each file name in a variable (say $name1). Then, using sed or awk, extract the second part of the file name (i.e. split on the dot and print the second column) and store it in another variable (say $extn).
Finally, rename the files with mv, using the name stored in $name1 to specify which file to rename and $extn to build the extension of the new name.
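The loop described above can be sketched in plain shell, using parameter expansion instead of sed/awk to split off the extension. This is only a sketch for the files named in the question; the mktemp sandbox is just for the demo:

```shell
#!/bin/bash
# Rename every file whose basename is "a" to basename "x",
# keeping each file's extension.
cd "$(mktemp -d)"                      # sandbox for the demo
touch a.dat a.txt a.mp3 b.dat b.txt    # sample files from the question

for f in a.*; do
    extn=${f#a.}                       # strip the leading "a." to get the extension
    mv -- "$f" "x.$extn"
done

ls
```

Unlike the sed/awk route, ${f#a.} also copes with file names that contain more than one dot.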
I would like a Linux command which keeps only the 5 most recent files that start with REF, deletes the older files that also start with REF, and does not touch the other files.
For example: in my folder, I have:
-rw-r--r-- 1 0 Jan 1, 2022 File_0
-rw-r--r-- 1 0 Jan 1, 2022 REF_1
-rw-r--r-- 1 0 Feb 1 2022 REF_2
-rw-r--r-- 1 0 March 1, 2022 REF_3
-rw-r--r-- 1 0 Apr 1, 2022 REF_4
-rw-r--r-- 1 0 May 1, 2022 REF_5
-rw-r--r-- 1 0 June 1, 2022 REF_6
-rw-r--r-- 1 0 Jul 1, 2022 file_7
-rw-r--r-- 1 0 1 Aug 2022 file_8
-rw-r--r-- 1 0 Sep 1, 2022 REF_9
The command should remove only:
-rw-r--r-- 1 0 Jan 1, 2022 REF_1
-rw-r--r-- 1 0 Feb 1 2022 REF_2
... and should keep the other files. I tried ls -t REF* | head -n+4 | xargs rm REF* but this command deletes all files that start with REF!
What command can I use?
Using zsh (available on many Linux distributions and also on AIX from IBM's AIX Toolbox for Open Source Software), you could simply:
rm REF*(om[6,-1])
This uses zsh's powerful globbing (filename generation) abilities to:
gather the list of files starting with REF
sort the files by their modification time (newest first) with (om...)
keep the five newest files by selecting the 6th and remaining files with [6,-1]
pass that list of files to rm
Test it first with a simple print -l REF*(om[6,-1]) to see which files would be collected.
See Glob Qualifiers for more about zsh's glob qualifiers.
Logrotate was mentioned, why not use it?
It can't handle the separator being an underscore (_):
$ cat log.conf
REF* {
rotate 5
}
% logrotate log.conf
error: log.conf:1 keyword 'REF' not properly separated, found 0x2a
Here is a complete script that handles typical filenames safely (names containing newlines would still confuse the stat/sort stage):
find . -name 'REF_*' -print0 | \
xargs -0 stat -c "%Y %n" | \
sort -n | \
head -n -5 | \
sed -e 's/^[0-9]* //' | \
tr '\12' '\0' | \
xargs -0 rm
First, use find with NUL terminators to fetch the list of files.
Then use xargs to run stat, prepending the Unix time stamp to each name.
Use sort -n to sort oldest first.
Use head -n -5 to select all files except the newest five.
Use sed to strip the temporary Unix time stamp.
Use tr to convert the newlines from stat back to NULs.
Finally, use xargs -0 rm to delete the unwanted files.
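For typical filenames (no whitespace or newlines), the original attempt can also be repaired directly: the bug was the extra REF* glob after rm, which made rm delete everything regardless of what xargs passed it, and head should be tail -n +6 to keep the newest five. A sketch in a sandbox, with touch -d dates mirroring the listing in the question:

```shell
#!/bin/bash
cd "$(mktemp -d)"                      # sandbox for the demo
touch -d 2022-01-01 REF_1; touch -d 2022-02-01 REF_2
touch -d 2022-03-01 REF_3; touch -d 2022-04-01 REF_4
touch -d 2022-05-01 REF_5; touch -d 2022-06-01 REF_6
touch -d 2022-09-01 REF_9
touch File_0 file_7                    # must survive untouched

# ls -t lists newest first; skip the first five; remove the rest
ls -t REF_* | tail -n +6 | xargs -r rm --

ls
```

This removes only REF_1 and REF_2, as the question requires.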
This question already has answers here:
How do I prevent tar from overwriting an existing archive?
(2 answers)
Closed 4 years ago.
I made a mistake: I forgot to supply the archive file name argument when using the tar command, like below:
[john#foobar foo]$ ll
total 0
-rw-rw-r-- 1 john john 0 7月 4 19:20 2018 file1
-rw-rw-r-- 1 john john 0 7月 4 19:20 2018 file2
-rw-rw-r-- 1 john john 0 7月 4 19:20 2018 file3
[john#foobar foo]$ tar -cvzf file1 file2 file3
file2
file3
[john#foobar foo]$ ll
total 4
-rw-rw-r-- 1 john john 130 7月 4 19:21 2018 file1
-rw-rw-r-- 1 john john 0 7月 4 19:20 2018 file2
-rw-rw-r-- 1 john john 0 7月 4 19:20 2018 file3
When you forget to supply the archive name, tar takes the first file argument (file1 here) as the archive and overwrites it.
I checked man tar, but it seems there is no option like cp's -i that prompts when a file with the same name already exists.
Is writing a foolproof wrapper script a possible way?
From man tar:
-k, --keep-old-files
don’t replace existing files when extracting, treat them as errors
--skip-old-files
don’t replace existing files when extracting, silently skip over them
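Note that those options only protect files during extraction; they don't stop tar -c from clobbering an existing archive. One approach, as the question itself suggests, is a small wrapper script that refuses to overwrite. The function name safe_tar is made up for this sketch:

```shell
#!/bin/bash
# Refuse to create an archive when the target file already exists.
safe_tar() {
    local archive=$1; shift
    if [ -e "$archive" ]; then
        echo "safe_tar: $archive already exists, refusing to overwrite" >&2
        return 1
    fi
    tar -czf "$archive" "$@"
}

cd "$(mktemp -d)"                          # sandbox for the demo
touch file1 file2 file3

safe_tar backup.tar.gz file1 file2 file3   # creates the archive
if ! safe_tar backup.tar.gz file1; then    # second call refuses
    echo "archive preserved"
fi
```

Putting the archive name first, before the member files, keeps the call signature close to plain tar -czf.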
I've scoured various message boards to understand why I can't use a variable as input to a command in certain scenarios. Is it a STDIN issue/limitation? Why does using echo and here strings fix the problem?
For example,
~$ myvar=$(ls -l)
~$ grep Jan "$myvar"
grep: total 9
-rwxr-xr-x 1 jvp jvp 561 Feb 2 23:59 directoryscript.sh
-rw-rw-rw- 1 jvp jvp 0 Jan 15 10:30 example1
drwxrwxrwx 2 jvp jvp 0 Jan 19 21:54 linuxtutorialwork
-rw-rw-rw- 1 jvp jvp 0 Jan 15 13:08 littlefile
-rw-rw-rw- 1 jvp jvp 0 Jan 19 21:54 man
drwxrwxrwx 2 jvp jvp 0 Feb 2 20:33 projectbackups
-rwxr-xr-x 1 jvp jvp 614 Feb 2 20:41 projectbackup.sh
drwxrwxrwx 2 jvp jvp 0 Feb 2 20:32 projects
-rw-rw-rw- 1 jvp jvp 0 Jan 19 21:54 test1
-rw-rw-rw- 1 jvp jvp 0 Jan 19 21:54 test2
-rw-rw-rw- 1 jvp jvp 0 Jan 19 21:54 test3: File name too long
As you can see I get the error... 'File name too long'
Now, I am able to get this to work by using either:
echo "$myvar" | grep Jan
grep Jan <<< "$myvar"
However, I'm really after a better understanding of why this is the way it is. Perhaps I am missing something about basics of command substitution or what is an acceptable form of STDIN.
The grep utility can operate...
On files the names of which are provided on the command line, after the regular expression used for matching
On a stream supplied on its standard input.
You are doing this :
myvar=$(ls -l)
grep Jan "$myvar"
This provides the content of variable myvar as an argument to the grep command, and since it is not a file name, it does not work.
There are many ways to achieve your goal. Here are a few examples.
Use the content of the variable as a stream connected to the standard input of grep, with one of the following methods (all providing the same output) :
grep Jan <<<"$myvar"
echo "$myvar" | grep Jan
grep Jan < <(echo "$myvar")
Avoid the variable to start with, and send the output of ls directly to grep :
ls -l | grep Jan
grep Jan < <(ls -l)
Provide grep with an expression that actually is a file name :
grep Jan <(ls -l)
The <(ls -l) expression is process substitution: Bash creates a FIFO (first-in first-out) special file (or an entry under /dev/fd, depending on the system), connects the output of the ls -l command to it, and replaces the expression with an actual file name that can be used for reading.
To clear any confusion, the two statements below (already shown above) look similar, but are fundamentally very different :
grep Jan <(ls -l)
grep Jan < <(ls -l)
In the first one, grep receives a file name as an argument and reads that file. In the second case, the additional < (the whitespace between the two < is important) creates a redirection that reads from the FIFO and feeds its content to the standard input of grep. There is a FIFO in both cases, but it is presented to the command in a totally different way.
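A minimal bash sketch contrasting the forms discussed above, using printf instead of ls -l so the input is deterministic (the sample lines are invented for the demo):

```shell
#!/bin/bash
myvar=$(printf '%s\n' '-rw- 0 Jan 15 littlefile' '-rw- 0 Feb 02 example1')

grep Jan <<<"$myvar"                  # here-string: variable content on stdin
echo "$myvar" | grep Jan              # pipe: same effect
grep Jan <(printf '%s\n' "$myvar")    # process substitution: a file-name argument

# all three print: -rw- 0 Jan 15 littlefile
```

The first two feed grep's standard input; the third hands grep a file name to open, which is why it works without any redirection character.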
I think there's a fundamental misunderstanding of how Unix tools/Bash operates here.
It appears what you're trying to do here is store the output of ls in a variable (which is something you shouldn't do for other reasons) and trying to grep across the string stored inside that variable using grep.
This is not how grep works. If you look at the man page for grep, it says:
SYNOPSIS
grep [OPTIONS] PATTERN [FILE...]
grep [OPTIONS] [-e PATTERN | -f FILE] [FILE...]
DESCRIPTION
grep searches the named input FILEs for lines containing a match to
the given PATTERN. If no files are specified, or if the file “-” is
given, grep searches standard input. By default, grep prints the
matching lines.
Note that it specifically says "grep searches the named input FILEs".
Then it goes on to say "If no files are specified [...] grep searches standard input".
In other words, by definition grep does not search literal strings passed as arguments; it searches files (or its standard input). Therefore you cannot pass grep a string via a bash variable in the FILE position.
When you type
grep Jan "$myvar"
Based on the syntax, grep thinks "Jan" is the PATTERN and the entire string in "$myvar" is a FILEname. Hence the error File name too long.
When you write
echo "$myvar" | grep Jan
What you're now doing is making bash write the contents of "$myvar" to standard output. The | (pipe operator) in bash connects the stdout (standard output) of the echo command to the stdin (standard input) of the grep command. As noted above, when you omit the FILE parameter, grep searches its stdin by default, which is why this works.
grep takes file names as command-line parameters, not literal strings. You do indeed need the echo (or a here-string) to make grep search the contents of your variable.
I'm trying to remove the group and all users part from a permissions string. Such as -rwxr-xr-x and I want it to be -rwx and then take off the leading dash to make it rwx
Right now I'm getting the permissions string via this code: filePerms=$(stat --format=%A $path) where $path is just a any directory to a file.
Here is my attempt at using cut to get rid of the 4th character onward: filePermsTest=$(cut -c1- $filePerms). But I get this error:
cut: invalid option -- 'r'
Try 'cut --help' for more information.
Here's the part of the cut manual describing the range I'm trying to use: N-: from N'th byte, character or field, to end of line.
For reference here's the normal output of my code (This doesn't have much to do with the question. It's just to give you a reference of what's going on and how this will be used):
Size Date Time Permissions File
--------------------------------------------------------
59M 2014-03-21 19:25 -rw-r--r-- ./old/VMwareTools-9.6.2-1688356.tar.gz
9.2M 2014-03-21 19:24 -rw-r--r-- ./old/vmware-tools-distrib/lib/icu/icudt44l.dat
7.6M 2013-07-07 21:21 -rwxr-xr-x ./old/Sublime Text 2/sublime_text
4.8M 2014-08-26 23:51 -rwxrwxr-x ./old/sublime_text_3/sublime_text
I think somehow part of the permission string is being taken in as an argument (cut expects file names as operands, so the unquoted -rw-r--r-- is parsed as options, hence invalid option -- 'r').
Try stat --format=%A filename | cut -c 2-4.
This picks out the second to fourth characters, which are the ones you want.
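Alternatively, since you already have the string in a variable, bash's substring expansion avoids spawning cut at all. A sketch, with the literal permission string standing in for the stat output:

```shell
#!/bin/bash
filePerms='-rwxr-xr-x'        # what stat --format=%A might return
echo "${filePerms:1:3}"       # offset 1, length 3: skips the leading dash, prints rwx
```

${var:offset:length} counts from zero, so offset 1 drops the file-type dash and length 3 keeps only the user triplet.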
Use awk's substr function to cut out some characters from a particular column.
command | awk 'NR>2{$4=substr($4,2,3)}1'
Example:
$ cat file
Size Date Time Permissions File
--------------------------------------------------------
59M 2014-03-21 19:25 -rw-r--r-- ./old/VMwareTools-9.6.2-1688356.tar.gz
9.2M 2014-03-21 19:24 -rw-r--r-- ./old/vmware-tools-distrib/lib/icu/icudt44l.dat
7.6M 2013-07-07 21:21 -rwxr-xr-x ./old/Sublime Text 2/sublime_text
4.8M 2014-08-26 23:51 -rwxrwxr-x ./old/sublime_text_3/sublime_text
$ awk 'NR>2{$4=substr($4,2,3)}1' file
Size Date Time Permissions File
--------------------------------------------------------
59M 2014-03-21 19:25 rw- ./old/VMwareTools-9.6.2-1688356.tar.gz
9.2M 2014-03-21 19:24 rw- ./old/vmware-tools-distrib/lib/icu/icudt44l.dat
7.6M 2013-07-07 21:21 rwx ./old/Sublime Text 2/sublime_text
4.8M 2014-08-26 23:51 rwx ./old/sublime_text_3/sublime_text
I have some lines in the forms below:
-rw-r--r-- sten/sefan anonymous 8593 2011-12-05 18:28 8M
-rw-r--r-- sten/sefan 8593 2011-12-05 18:28 8M
How can I get the 8593 with a one-liner?
The lines are retrieved by performing some dry-run of archives, e.g.:
$ tar jtvf zip64support.tar.bz2
-rw-r--r-- stefan.bodewig/Domain Users 16195018 2011-10-14 21:05 100k_Files.zip
-rw-r--r-- stefan.bodewig/Domain Users 14417258 2011-10-14 21:05 100k_Files_7ZIP.zip
or:
$ tar jtvf bla.tar.bz2
-rw-r--r-- tcurdt/tcurdt 610 2007-11-14 18:19 test1.xml
-rw-r--r-- tcurdt/tcurdt 82 2007-11-14 18:19 test2.xml
Specifically to get the number in a line with YYYY-mm-dd after it.
The command you are after to get the file sizes in the current directory is
$ stat -c %s *
You do not want to use bash, awk or cut by column number to do this, and your question is a great reason why: in the first line the size would be the fourth column, while in the second it's the third. Parsing the output of ls is not recommended!
Edit:
Since the column number is not guaranteed, I would use grep with a positive lookahead:
$ tar jtvf zip64support.tar.bz2|grep -Po '[0-9]+(?= [0-9]{4}-[0-9]{2}-[0-9]{2})'
16195018
14417258
Give this a try:
tar jtvf bla.tar.bz2|awk '$0=$3'
In your question you mentioned:
get the number in a line with YYYY-mm-dd after it.
If you really want to do it with grep:
tar ... |grep -oP '\d+(?= \d{4}-)'
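A quick self-contained check of the lookahead approach, feeding it sample tar -tv lines (taken from the question) via printf instead of a real archive; this needs GNU grep for -P:

```shell
#!/bin/bash
# Extract the number that is immediately followed by a YYYY-mm-dd date.
printf '%s\n' \
  '-rw-r--r-- stefan.bodewig/Domain Users 16195018 2011-10-14 21:05 100k_Files.zip' \
  '-rw-r--r-- tcurdt/tcurdt 610 2007-11-14 18:19 test1.xml' |
grep -oP '[0-9]+(?= [0-9]{4}-[0-9]{2}-[0-9]{2})'
# prints:
# 16195018
# 610
```

Because the lookahead anchors on the date rather than on a column position, it works whether or not the listing has the extra owner-group field.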