In Colab, I have unzipped a file, but now there are too many files in the directory for Colab's setup. Is there a command line to remove the last x files of a directory?
I know I can remove all the files from this directory with rm -rf *, but I just want to remove, for instance, the last 100 files.
Try globbing or, better, regular expressions.
The easiest way is globbing: you use the star * plus some differentiation. For example, rm *.txt will delete all files that end with .txt, and rm document*.local will delete all files which start with document and end with .local.
The better way is searching for files by attribute and executing a command on the result, but that is a bit complex to explain, so check this out:
https://www.cyberciti.biz/faq/linux-unix-how-to-find-and-remove-files/
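For instance, a minimal sketch of that find-based approach (the *.txt pattern and the 30-day age cutoff are placeholders for illustration, not taken from the link):
find . -maxdepth 1 -type f -name '*.txt' -mtime +30           # preview what would match
find . -maxdepth 1 -type f -name '*.txt' -mtime +30 -delete   # then delete the matches (GNU find)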
Using a shell array and parameter expansion:
all_files=(*)                                 # glob every name in the directory into an array
printf '%s\n' "${all_files[@]: -100}" | nl    # list the last 100 entries, numbered
#rm "${all_files[@]: -100}"
Uncomment the last line if it looks like the correct list of files to delete.
The space between the colon and the minus sign is required to disambiguate from another form of parameter expansion.
Ref: 3.5.3 Shell Parameter Expansion
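Note that the glob expands in alphabetical order, so "the last 100" here means the 100 names that sort last, not the 100 oldest. If you want the 100 least recently modified files instead, a hedged sketch (it assumes file names without newlines):
ls -t | tail -n 100                           # newest first, so tail gives the 100 oldest; review this list
#ls -t | tail -n 100 | xargs -d '\n' rm --    # uncomment to delete them (GNU xargs)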
I have a folder /home/user/Document/filepath where I have three files, namely file1-1.1.0.txt, file2-1.1.1.txt, file3-1.1.2.txt,
and another folder named /home/user/Document/backuppath to which I have to move files from /home/user/Document/folderpath, which has file1-1.0.0.txt, file2-1.0.1.txt and file3-1.0.2.txt.
The task is to copy the specific files from the folder path to the backup path.
To summarize:
Below is the files.txt where I listed the files which have to be copied:
file1-*.txt
file2-*.txt
Below is the move.sh script that executes the copy:
for file in `cat files.txt`; do cp "/home/user/Document/folderpath/$file" "/home/user/Documents/backuppath/" ; done
For the above script I am getting errors like:
cp: cannot stat '/home/user/Document/folderpath/file1-*.txt': No such file or directory found
cp: cannot stat '/home/user/Document/folderpath/file2-*.txt': No such file or directory found
What I would like to accomplish is to use the script to copy specific files with * in the place of the version numbers, since the version numbers may vary in the future.
You have wildcard characters in your files.txt. In your cp command, you are using quotes. These quotes prevent the wildcards from being expanded, as you can clearly see from the error message.
One obvious possibility is to not use quotes:
cp /home/user/Document/folderpath/$file /home/user/Documents/backuppath/
Or not use a loop at all:
cp $(<files.txt) /home/user/Documents/backuppath/
However, this would of course break if a line in your files.txt were a filename pattern that contains whitespace. Therefore, I would recommend a second loop over the expanded pattern:
while read -r file   # puts the next line into 'file'
do
    for f in $file   # this expands the pattern in 'file'
    do
        cp "/home/user/Document/folderpath/$f" /home/user/Documents/backuppath
    done
done < files.txt
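One more caveat: if a pattern in files.txt matches nothing, bash passes it through unexpanded, and cp fails with the same "cannot stat" error. A hedged refinement using nullglob, under which unmatched patterns expand to nothing:
shopt -s nullglob    # unmatched patterns vanish instead of staying literal
while read -r file
do
    for f in /home/user/Document/folderpath/$file   # expand the pattern in place
    do
        cp "$f" /home/user/Documents/backuppath/
    done
done < files.txt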
How can I take each one of my directory's files and do something with them?
For instance, I want to make a script:
#!/bin/bash
cd /myself
#for-loop that will select one by one all the files in /myself
#for each X file I will do this:
tar -cvfz X.tar.gz /myself2
A for loop in bash is similar to Python's model (or maybe the other way around?).
The model goes "for instance in list":
for some_instance in "${MY_ARRAY[@]}"; do
    echo "doing something with $some_instance"
done
To get a list of files in a directory, the quick and dirty way is to parse the output of ls and slurp it into an array, à la array=($(ls)).
To quickly explain what's going on here, to the best of my knowledge: assigning a variable to a space-delimited string surrounded with parens splits the string and turns it into a list.
The downside of parsing ls is that it doesn't cope with files that have spaces in their names. For that, I'll leave you with a link to turning a directory's contents into an array, the same place I lovingly :) ripped off the original array=($(ls -d */)) command.
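As a sketch of that safer glob-based approach, applied to the tar task above (assuming you want one archive per file in /myself):
#!/bin/bash
cd /myself || exit 1
files=(*)                        # glob straight into an array; no ls parsing, spaces are safe
for f in "${files[@]}"
do
    tar -czvf "$f.tar.gz" "$f"   # -f must be the last bundled flag, right before the archive name
done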
You can use a while loop, as it will take care of whole lines that include spaces as well:
#!/bin/bash
cd /myself
ls | while read -r f
do
    tar -czvf "$f.tar.gz" "$f"   # note: -f must come last in the bundled flags
done
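If file names might even contain newlines, a null-delimited find is more robust (a sketch under the same one-archive-per-file assumption):
#!/bin/bash
cd /myself || exit 1
find . -maxdepth 1 -type f ! -name '*.tar.gz' -print0 |   # skip the archives we create
while IFS= read -r -d '' f
do
    tar -czvf "$f.tar.gz" "$f"
done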
You can try this way also:
for i in $(ls /myself/*)
do
    tar -czvf "$i.tar.gz" /myself2
done
I have this structure:
release/folder1/file1
release/folder2/file2
...
release/folderN/fileN
I want to include all those folders (folder1, folder2 ... folderN) in a tar file.
The key is that I want these folders to be in the final tar within another directory named MYAPP so when you open the tar you can see this:
MYAPP/folder1/file1
MYAPP/folder2/file2
...
MYAPP/folderN/fileN
How can I achieve this without renaming the original "release" directory and/or creating new directories?
Is this possible to achieve just in the tar process?
Thanks
Add
--transform=s#^release/#MYAPP/#
to your tar command line.
The argument of the --transform option is a command that is passed to sed together with each file path before the path is stored in the archive (use tar -tf to show the names of the files stored in the archive).
The command s#^release/#MYAPP/# tells sed to search for (s) release/ at the beginning of the string (^) and replace it with MYAPP/.
The / at the end of the search and replacement strings is needed to be sure the complete name of the component is release (so that release.txt is not also rewritten). The # character is just a regex delimiter. Usually / is used as the delimiter, but a different one is preferred here to avoid having to escape the / that appears in the search and replacement strings.
Read more in the documentation of tar and sed.
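Putting it together, a hedged example (the archive name myapp.tar.gz is just an illustration):
tar -czf myapp.tar.gz --transform='s#^release/#MYAPP/#' release/
tar -tzf myapp.tar.gz    # the stored names should now start with MYAPP/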
On the command line, how can we recursively find all the zip files in a directory and its subdirectories, keep only the 5 most recently modified, and delete the rest?
The file paths would be something like below:
basedirectory/2015/12/18/abc.zip
basedirectory/2015/12/18/def.zip
basedirectory/2015/12/18/ghi.zip
basedirectory/2015/12/18/jkl.zip
basedirectory/2015/12/08/mno.zip
basedirectory/2015/12/08/pqr.zip
basedirectory/2015/12/08/stu.zip
basedirectory/2015/12/07/stu.zip
I have a way, but it involves several (easy) steps. There are probably more elegant ways of doing this, but here is how I know how. They come from a couple of sources, which I list at the end of my answer. You will use the already installed utilities cd, find, ls, rm and head. It will involve creating and executing two bash scripts.
Open a terminal and change into your base directory with cd ~/basedirectory
This sets up the following commands. It is important that you stay in this directory for the rest of the commands.
Type find `pwd` -name '*.zip' > find_zip
This creates a list of all the zip files, with full paths, under the directory you changed into. Instead of printing them to the screen, it writes them to a find_zip file in that directory.
Type cp find_zip remove_old_zip
This creates a second, duplicate file that you will later use to delete the old files.
Open the find_zip file in your favorite text editor. If you're not used to using any, you can use gedit. If you don't have it, install it with sudo apt-get update && sudo apt-get install gedit
Do a search and replace as follows (in gedit): search for \n and replace it with " \\n".
This places the list of files within quotes. The first backslash places a "\" at the end of each line, which tells bash to continue reading the next line and execute all the code together. The \n preserves the line endings. The last " puts a quote at the beginning of each line. You need the quotes to escape special characters like ' and ( that may be in your file names.
Create 2 new lines at the top of the file and type:
#!/bin/bash
ls -lt \
The first line turns your file into a bash script. The second line will list all the files you found with the find command and order them by date.
Create a new line at the bottom of your file and type: | head -5. Save and exit the file.
The | is a "pipe" that takes the ordered file list that ls produces and feeds it into the head command. The head command keeps just the 5 most recently modified files and prints them on your screen.
As a result of steps 5-7, your file should go from looking like this:
basedirectory/2015/12/18/abc.zip
basedirectory/2015/12/18/def.zip
basedirectory/2015/12/18/ghi.zip
basedirectory/2015/12/18/jkl.zip
basedirectory/2015/12/08/mno.zip
basedirectory/2015/12/08/pqr.zip
basedirectory/2015/12/08/stu.zip
basedirectory/2015/12/07/stu.zip
to this:
#!/bin/bash
ls -lt \
basedirectory/2015/12/18/abc.zip \
basedirectory/2015/12/18/def.zip \
basedirectory/2015/12/18/ghi.zip \
basedirectory/2015/12/18/jkl.zip \
basedirectory/2015/12/08/mno.zip \
basedirectory/2015/12/08/pqr.zip \
basedirectory/2015/12/08/stu.zip \
basedirectory/2015/12/07/stu.zip \
| head -5
Type bash find_zip in the terminal. With your newfound list of the 5 most recent files, open up the remove_old_zip file created in step 3.
You will also be turning this file into a bash script, but it will remove all but the five newest files.
Delete the lines in the remove_old_zip file containing the 5 files you want to keep.
Do a search and replace as follows (in gedit): search for \n and replace it with " \\n".
This is the same as step 5.
Create 2 new lines at the top of the file and type:
#!/bin/bash
rm \
This is similar to step 6 except that rm will delete the files still listed.
Remove the final \ on the final line of the remove_old_zip file. Save and exit.
Type bash remove_old_zip.
Type rm find_zip remove_old_zip.
This removes the two scripts, which are now useless since the files have been deleted.
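For reference, the whole procedure collapses into a single pipeline on systems with GNU find and coreutils (a sketch; it assumes file names without newlines, so keep the echo until the printed list looks right):
find . -name '*.zip' -printf '%T@ %p\n' |   # modification time plus path
    sort -rn |                              # newest first
    tail -n +6 |                            # skip the 5 newest
    cut -d' ' -f2- |                        # strip the timestamp
    xargs -d '\n' echo rm --                # drop 'echo' to actually delete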
sources:
How can I list (ls) the 5 last modified files in a directory?
http://www.geekinterview.com/talk/758-how-to-continue-to-next-line.html
List files recursively in Linux CLI with path relative to the current directory
I have a directory with a lot of subdirectories with a # in front of them:
#adhasdk
#ad18237
I want to rename them all and remove the # character.
I tried to do:
rename -n `s/#//g` *
but it didn't seem to work:
-bash: s/#//g: No such file or directory
Any ideas on this?
Thanks
Just use
$ rename 's/^#//' *
Use -n first just to check that what you think would happen really happens.
Your example contains the clue about the wrong quotes (backticks) in the error message:
-bash: s/#//g: No such file or directory
bash is trying to execute a command named s/#//g.
Note that using g (global) and not anchoring the regular expression would replace every #, not just the one in the first position.
I don't know whether it's just a typo when you typed it here, but that "rename" command should work if:
you leave off the "-n" and
you quote the substitution with regular single-quotes and not back-quotes
The "-n" tells it to not really do anything. The back-quotes are just wrong (they mean something but not what you want here).
The problem is that you use backticks (`). You should use normal quotes:
rename -n 's/#//g' *
A plain bash alternative, with echo as a dry run (remove the echo to actually rename):
for DIR in \#*/
do
    echo mv "$DIR" "${DIR/#\#/}"   # ${DIR/#\#/} strips a leading # from the name
done
I had to rename all folders inside a given folder. Each folder name had some text inside round braces. The following command removed the round braces and the text inside them from all folder names (the parentheses are escaped so they match literally):
rename 's/\(.+\)//' *
Some distros don't support regexps in rename. You have to install prename. Even more, sometimes you can't install prename and you have to install gprename to get a prename binary.
If you have prename, then just change the backtick character ` to a single quote and everything should work.
So the solution should be:
prename -n 's/#//g' *
or
prename -n 'y/#//d' *   (the d flag makes the transliteration delete the # characters)