Can't SSH to keep latest 5 folders and delete the older folders [closed] - linux

Currently I am using ls -1t | tail -n +6 | xargs rm -rf and it works fine on the server itself. But when I run it through ssh as root in a bash script, it doesn't work.
This is the line I am using: ssh -q -oStrictHostKeyChecking=no -oConnectTimeout=1 root@$host "sudo cd /path/to/folder && sudo ls -1t | tail -n +6 | xargs rm -rf"
May I know what's the issue here?

root@$host suggests that you're already connecting as root, so using sudo is redundant here.
cd /path/to/folder && ls -1t | tail -n +6 | xargs rm -rf
should do the trick.
But this is only safe if you know for certain that /path/to/folder cannot contain any files with possibly dangerous characters in their names. For example, a file with a newline in its name, such as ..\n, could cause the whole parent directory to be deleted.
The reason your original command does not work is that sudo executes a program, not a series of shell commands. Moreover, cd is not a program but a shell builtin, so it can't be executed through sudo; that wouldn't make sense anyway, since the directory change would be lost as soon as cd returned. Even if it did work, the first statement (sudo cd /path/to/folder) would execute successfully, and then the second one (sudo ls -1t | tail -n +6 | xargs rm -rf) would execute in the current directory, with only the ls command running as root and the rest as the current user.
To execute the whole command line through sudo:
sudo sh -c "cd /path/to/folder && ls -1t | tail -n +6 | xargs rm -rf"
Or, if the current user has access rights for /path/to/folder, then actually only the last part needs to be executed as root:
cd /path/to/folder && ls -1t | tail -n +6 | sudo xargs rm -rf
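As an aside, the dangerous-file-name caveat above can be sidestepped entirely with NUL-delimited output instead of parsing ls. A minimal sketch, assuming reasonably recent GNU findutils and coreutils (tail -z needs coreutils 8.25 or newer):
cd /path/to/folder && find . -mindepth 1 -maxdepth 1 -printf '%T@\t%p\0' | sort -zrn | tail -z -n +6 | cut -z -f2- | xargs -0 -r rm -rf --
Here find prints each entry as its mtime, a tab and the name, NUL-terminated; sort -zrn orders newest first; tail -z -n +6 keeps everything past the 5 newest; cut -z -f2- strips the timestamp; and xargs -0 deletes the rest.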

Related

In shell, how to remove file if bigger than 100MB, move otherwise [closed]

What would be the easiest way to do the following with shell commands?
Pseudo code: rm abc if >100MB else mv abc /tmp
abc could either be a name of ONE file or ONE directory.
I want to have an alias that, when run on a file or directory, removes it if its size is greater than 100MB and otherwise moves it to another directory.
I know I could accomplish something similar with a whole function, but there must be a slick one-liner that can do the same.
To move a single regular file if its size is lower than 100MB and delete it otherwise, you can use the following command:
# 104857600 = 1024 * 1024 * 100 = 100M
[ $(stat --printf '%s' "$file") -gt 104857600 ] && rm "$file" || mv "$file" /tmp/
To move a single directory and its content if its combined size is lower than 100MB and delete it otherwise, you can use the following command (du -sb reports bytes rather than du's default 1K blocks, so the same threshold applies; -b is a GNU extension):
[ $(du -sb "$directory" | cut -f1) -gt 104857600 ] && rm -rf "$directory" || mv "$directory" /tmp/
To do one or the other depending on whether the input parameter points to a file or a directory, you can use if [ -d "$path" ]; then <directory pipeline>; else <file pipeline>; fi.
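Putting those pieces together, a minimal sketch of a function you could put behind an alias (the name rm100 is made up here; GNU stat and du assumed):
rm100() {
    local path=$1 size
    if [ -d "$path" ]; then
        size=$(du -sb "$path" | cut -f1)     # directory: combined size in bytes (GNU du)
    else
        size=$(stat --printf '%s' "$path")   # regular file: size in bytes (GNU stat)
    fi
    if [ "$size" -gt 104857600 ]; then
        rm -rf -- "$path"                    # bigger than 100MB: delete
    else
        mv -- "$path" /tmp/                  # 100MB or less: move to /tmp
    fi
}
Usage: rm100 abc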
To recursively move or delete all the files of a directory depending on their size you can use the following :
find . -type f -a \( -size +100M -exec rm {} + -o -exec mv -t /tmp/ {} + \)
It first selects files in the current directory, then executes rm ... on the batch of files whose size is greater than 100M and mv ... /tmp on the rest.
This is possible with a combination of the find command, an xargs / rm pipeline and rsync, performed in the right order in the script.
For example:
find /foo/bar -type f -size +100M -print0 | xargs -0 rm
Piping through xargs is for efficiency, so that the rm command is not executed once for each file found by find (-print0 with xargs -0 keeps file names with whitespace intact).
Next, an rsync invocation mirrors the hierarchy of the remaining files (from your question it's not 100% clear to me whether there are subdirectories or not, so I propose rsync, which also covers a subdirectory hierarchy) to a different path, using the rsync command line option
--remove-source-files
rsync -av --remove-source-files /foo/bar /tmp
Conclusion: this combination of find / rsync in the proper order works much more efficiently than solutions based on find ... -exec ... \;, where the executed program is forked once per file found. Experienced Unix admins avoid per-file -exec so as not to waste system resources, because batching scales much better when there are a lot of files. (Note that find ... -exec ... +, as used above, batches arguments just like xargs.)

Find in file and then move that file using Linux? [closed]

I want to be able to find files that contain certain strings and then move that list of files to directory X.
I can use this command to find the files
find . -iname 'commaus*' | xargs grep '>24901<' -sl
and this command to move files
mv * /home/user/scripts/xslt
But is there a way to combine these commands so that the found files are moved?
I have seen similar joined find/action commands such as
find /home/user -name property_images -ok rm -f {} \;
but following this structure is returning an error
find . -iname 'commaus*' | xargs grep '>24901<' -sl -ok mv {} /home/user/scripts/xslt;
Use a loop. In this case, try:
for i in `find . -iname 'commaus*' | xargs grep '>24901<' -sl`; do mv "$i" /home/user/scripts/xslt/; done
Very hackish, but it should work.
You can do this by wrapping it in a for loop:
for i in `find /path/to/search -iname 'optionalfilename' -exec grep -H -m1 '>24901<' {} \; | awk -F: '{print $1}'`
do
mv "$i" /path/to/new/location
done
This will not work as expected if filenames contain spaces or colons.
You might also be able to try (without a loop):
find . -iname 'commaus*' | xargs grep -sl '>24901<' | xargs mv -t /home/user/scripts/xslt
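For what it's worth, a whitespace-safe sketch of that same pipeline, assuming GNU find, grep and mv (grep -Z emits NUL-terminated file names and mv -t takes the target directory first):
find . -iname 'commaus*' -print0 | xargs -0 grep -slZ '>24901<' | xargs -0 mv -t /home/user/scripts/xslt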

How to show a 'grep' result with the complete path or file name [closed]

How can I get the complete file path when I use grep?
I use commands like
cat *.log | grep somethingtosearch
I need to show the result with the complete file path from which the matched results were taken.
How can I do it?
Assuming you have two log-files in:
C:/temp/my.log
C:/temp/alsoMy.log
'cd' to C: and use:
grep -r somethingtosearch temp/*.log
It will give you a list like:
temp/my.log:somethingtosearch
temp/alsoMy.log:somethingtosearch1
temp/alsoMy.log:somethingtosearch2
I think the real solution is:
grep -H somethingtosearch *.log
With cat *.log | grep -H somethingtosearch, grep never sees the file names and would label every match (standard input).
Command:
grep -rl --include="*.js" "searchString" ${PWD}
Returned output:
/root/test/bas.js
If you want to see the full paths, I would recommend cd'ing to the top directory (of your drive, if using Windows):
cd C:\
grep -r somethingtosearch C:\Users\Ozzesh\temp
Or on Linux:
cd /
grep -r somethingtosearch ~/temp
If you really insist on filtering by file name (*.log) and you want it recursive (the files are not all in the same directory), combining find and grep is the most flexible way:
cd /
find ~/temp -iname '*.log' -type f -exec grep -H somethingtosearch '{}' \;
(-H forces the file name prefix, which grep would otherwise omit because -exec hands it a single file at a time.) It is similar to BVB Media's answer.
grep -rnw 'blablabla' `pwd`
It works fine on my Ubuntu 16.04 (Xenial Xerus) Bash.
For me
grep -b "searchsomething" *.log
worked as I wanted
This works when searching files in all directories:
sudo ls -R | grep -i something_bla_bla
The output shows all files and directories whose names include "something_bla_bla": the directories with their paths, but not the files.
Then use locate on the file you want.
The easiest way to print full paths is to replace the relative start path with the absolute path:
grep -r --include="*.sh" "pattern" ${PWD}
Use:
grep somethingtosearch *.log
and the filenames will be printed out along with the matches.
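Note that grep only prefixes matches with file names when it is given more than one file (or -H, where supported). A classic portable trick for the single-file case, for example inside find -exec, is to pass /dev/null as an extra argument so grep always sees at least two files:
find . -name '*.log' -exec grep somethingtosearch /dev/null {} \;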

How do I use find to copy and remove extensions keeping the same subdirectory structure [closed]

I'm trying to copy all the files from one directory to another, removing all file extensions at the same time.
From directory 0001:
0001/a/1.jpg
0001/b/2.txt
To directory 0002:
0002/a/1
0002/b/2
I've tried several find ... | xargs c...p with no luck.
Recursive copies are really easy to do with tar. In your case:
tar -C 0001 -cf - --transform 's/\(.\+\)\.[^.]\+$/\1/' . |
tar -C 0002 -xf -
If your tar doesn't have --transform, this can work:
TRG=/target/some/where
SRC=/my/source/dir
cd "$SRC"
find . -type f -name \*.\* -printf "mkdir -p '$TRG/%h' && cp '%p' '$TRG/%p'\n" |\
sed 's:/\.::;s:/./:/:' |\
xargs -I% sh -c "%"
No spaces are allowed after the trailing backslashes; each must be followed immediately by the end of the line. Alternatively, you can join it all into one line like:
find . -type f -name \*.\* -printf "mkdir -p '$TRG/%h' && cp '%p' '$TRG/%p'\n" | sed 's:/\.::;s:/./:/:' | xargs -I% sh -c "%"
Explanation:
find finds all plain files that have extensions in your SRC (source) directory
find's -printf prepares the needed shell commands:
a command to create the needed directory tree under TRG (the target dir)
a command for copying
sed does some cosmetic path cleanup (like correcting /some/path/./other/dir)
xargs takes each whole line
and executes the prepared commands with a shell
But it would be much better to:
simply make an exact copy in the 1st step
rename the files in the 2nd step
That is easier, cleaner and FASTER (no need to check for and create the target subdirectories), as sketched below!
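A minimal sketch of that two-step approach, reusing the SRC and TRG variables from above (caveat: two source files differing only in their extension would collide on the same target name):
mkdir -p "$TRG" && cp -a "$SRC/." "$TRG/"
find "$TRG" -type f -name '*.*' -exec sh -c 'mv -- "$1" "${1%.*}"' _ {} \;
The first line makes an exact copy, subdirectories included; the second strips the extension from every copied file in place.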
Here's some find + bash + install that will do the trick:
for src in `find 0001 -type f` # for all files in 0001...
do
dst=${src/#0001/0002} # match and change beginning of string
dst=${dst%.*} # strip extension
install -D "$src" "$dst" # copy to dst, creating directories as necessary
done
This will change the permission mode of all copied files to rwxr-xr-x by default, changeable with install's --mode option.
I came up with this ugly duckling:
find 0001 -type d | sed 's/^0001/0002/g' | xargs mkdir
find 0001 -type f | sed 's/^0001//g' | awk -F '.' '{printf "cp -p 0001%s 0002%s\n", $0, $1}' | sh
The first line creates the directory tree, and the second line copies the files. Problems with this are:
There is only handling for directories and regular files (no symbolic links etc.)
If there are any periods (besides the extension) or special characters (spaces, etc.) in the filenames, then the second command won't work.

"rm" (delete) 8 million files in a directory? [closed]

I have 8 million files in my /tmp and I need to remove them. This server is also running a pretty important app and I cannot overload it.
$ ls | grep .| xargs rm
The above makes my app unresponsive.
Do you have any ideas how to remove these files? Thanks in advance!
Well, yes: don't use ls (it may sort the files, and the file list may use more memory than you would like), and don't add pointless indirection like a pipe or xargs.
find . -type f -delete
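If the worry is overloading the box, a hedged refinement (assuming util-linux ionice and GNU find) is to run the deletion at idle I/O priority and minimum CPU priority:
ionice -c3 nice -n19 find . -type f -delete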
grep . matches any line containing at least one character, which is practically every file name, so it filters nothing useful here.
Cut it out of the chain to remove one needless process from the pipeline. That should speed things up a little.
ls | xargs rm -rf
Note that this will choke on whitespace in file names, so an improvement is
ls | xargs -I{} rm -v {}
(though be aware that -I makes xargs run one rm per file, which is slow with millions of files).
Of course, a much faster method is to remove the directory and recreate it. However, you do need to take care that your script doesn't get "lost" in the directory tree and remove stuff it shouldn't.
rm -rf dir
mkdir dir
Note that there are some subtle differences between removing all files, and removing and recreating the directory. Removing all files will only remove visible files and directories; while removing the directory and recreating will remove all files and directories, visible and hidden.
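Also note that removing and recreating the directory resets its owner and permission bits, which matters for a directory like /tmp (mode 1777). A small sketch to preserve them, assuming GNU stat:
owner=$(stat -c '%U:%G' dir)   # e.g. root:root
mode=$(stat -c '%a' dir)       # e.g. 1777, sticky bit included
rm -rf dir && mkdir dir && chown "$owner" dir && chmod "$mode" dir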
try this:
ls -1 | grep -v -e "ignoreFile" -e "ignoreFile2" | xargs rm -rf
ls -1 is a simpler form of ls | grep .
grep -v removes lines from the list; just give it any files that should not be deleted, introducing each pattern with the -e flag
And just for a complete explanation:
(I'm guessing this is already known)
rm -rf:
-r recursive
-f force
