Cannot remove directories via SSH when using rm and ls - linux

I'm trying to clean up release directories when deploying code to the server, but I need to keep at least 5.
Right now I use this command to do that:
rm -rf $(ls -1t /path/to/deploy/release | tail -n +6)
Everything looks good when I run this on the server, but it does not work when I try to run it via ssh...
ssh user@123.456.789.100 'rm -rf $(ls -1t /path/to/deploy/release | tail -n +6)'
Can anyone help or make a suggestion? What am I doing wrong?

Warning: like all things that involve rm -Rf and "should work", the below may actually delete all files on the computer (since ssh is involved, that's both the local and remote computers), and possibly set your kitchen on fire. I take no responsibility for damage etc... Backups (stored somewhere other than your kitchen) are always a good thing to have, and maybe you should test with rm -Ri first?
You don't say what the failure is, but one thing I notice is that the ls command will generate a list of filenames in the /path/to/deploy/release directory, but rm will not be running in that directory, so it's going to try to delete files by those names in your home directory instead of /path/to/deploy/release. This is fairly easy to fix by cding into that directory first (although be sure to write the command so if the cd fails, it won't just randomly delete files in the wrong directory).
Another problem is that the command depends on word splitting to turn the output of ls ... | tail ... into a list of filenames, which will fail if any filenames contain whitespace and/or wildcards. This is trickier to solve, so I'll just ignore the problem and hope it never blows up on you.
Anyway, with the cd fix (and the whitespace bug unfixed), here's what I get:
ssh user@123.456.789.100 'cd /path/to/deploy/release && rm -Rf $(ls -1t | tail -n +6)'
Again, testing with rm -Ri (and having good backups) is recommended.
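If you do later want to guard against whitespace in the release names (though not embedded newlines), one option is a newline-delimited xargs pipeline. This is only a sketch, assuming GNU xargs on the remote host:
ssh user@123.456.789.100 'cd /path/to/deploy/release && ls -1t | tail -n +6 | xargs -d "\n" -r rm -Rf --'
Here -d "\n" makes xargs split on newlines only, -r skips running rm entirely when there is nothing to delete, and -- stops a release name that begins with a dash from being parsed as an option.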

Related

Deleting all files in a directory except the ones mentioned in a list [duplicate]

I have a directory called a00 containing 3000 files with the extension .SAC. I have a text file called gd.list containing the names of 88 of those 3000 files. I am trying to write a script that will delete all .SAC files except those mentioned in gd.list.
How to do that using shell/bash?
The rm command is commented out so that you can check and verify that it's working as needed. Then just un-comment that line.
The check directory section will ensure you don't accidentally run the script from the wrong directory and clobber the wrong files.
You can remove the echo deleting line to run silently.
#!/bin/bash
# Exit if the directory isn't found.
cd /home/me/myfolder2tocleanup/ || { echo "Can't find work dir... exiting"; exit 1; }

for i in *; do
    if ! grep -qxFe "$i" filelist.txt; then
        echo "Deleting: $i"
        # The next line is commented out. Test first, then uncomment it to remove the files.
        # rm "$i"
    fi
done
You can find the original answer at https://askubuntu.com/questions/830776/remove-file-but-exclude-all-files-in-a-list by L. D. James.
There are a few alternatives.
I'd prefer a null-delimited pipeline (find -printf '%f\0' into grep -z into xargs -0), as it more clearly demarcates the file names:
find . -maxdepth 1 -name '*.SAC' -printf '%f\0' | grep -z -v -x -F -f gd.list | xargs -0 echo rm
Again, test this first. Perhaps sort the output and make sure it is unique versus the original file.
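One way to do that check, as a sketch (assuming GNU find for -printf; all.txt and keep.txt are just scratch files):
find . -maxdepth 1 -name '*.SAC' -printf '%f\n' | sort > all.txt
sort gd.list > keep.txt
comm -23 all.txt keep.txt    # names that would be deleted
comm -12 all.txt keep.txt    # names that would be kept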
For a smaller list of filenames I would recommend just using find with -and -not -name and -delete, but with a larger list that can be tricky.
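With hypothetical keeper names, that might look like:
find . -maxdepth 1 -name '*.SAC' -and -not -name 'keep1.SAC' -and -not -name 'keep2.SAC' -delete    # keep1.SAC and keep2.SAC are placeholder names
which clearly does not scale to 88 exceptions.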
You could mark the files you want to keep as read-only, then delete the rest with the appropriate setting in rm or find to skip read-only files. That assumes the read-only bit isn't already being used for something else. If the read-only flag is not an option for you, you could mark the keepers as executable instead and filter on that with find.
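A sketch of the read-only variant, assuming GNU xargs (for -a) and that gd.list holds one plain filename per line with no spaces:
xargs -a gd.list chmod a-w                             # protect the keepers
find . -maxdepth 1 -name '*.SAC' -perm -u+w -delete    # delete only files still writable by the owner
xargs -a gd.list chmod u+w                             # give the keepers their write bit back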
Another option would be to move the matching files to a temp folder, delete with the wildcard, then move the files you want to keep back. That assumes you can afford for the files to disappear temporarily.
To make them disappear for a shorter time, move the kept files out to a temp directory, move the original directory out, move the temp directory in, then delete the moved-out directory.
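A sketch of the move-aside variant, under the same assumptions about gd.list (and GNU mv for -t):
mkdir /tmp/keep
xargs -a gd.list mv -t /tmp/keep    # move the keepers out of harm's way
rm ./*.SAC                          # delete everything that's left
mv /tmp/keep/* .                    # bring the keepers back
rmdir /tmp/keep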
If you are feeling brave, try something like
ls *.SAC | grep -F -x -v -f gd.list | xargs echo rm
Note that I've put an echo in that xargs, just to make sure no one has a cut and paste accident.
Note also the limitations of this approach mentioned in the comments. As I said, if you are feeling brave...

Encoding filenames with encfsctl which begin with a "-" [duplicate]

Somehow, at some point, I accidentally created a file in my home directory named '-s'. It is about 500 kb and I have no idea if it contains important data or not. I cannot figure out any way to do anything with this file, because every command I use to try to view, copy, or move it interprets the filename as an argument.
I've tried putting it in quotes, escaping it with a backslash, a combination of the two, nothing seems to work.
Also, when I first posed this question to my coworkers, we puzzled over it for a while until someone finally overheard and asked "why don't you just rename it?" After I explained to him that cp and mv both think the filename is an argument so it doesn't work, he said "no, not from the command line, do it from Gnome." I sheepishly followed his advice, and it worked. HOWEVER I'm still interested in how you would solve this dilemma if you didn't have a window manager and the command line was the only option.
You can refer to it either as ./-filename, or, with many commands, by putting it after a double dash:
rm -- -filename
You can get rid of it with:
rm ./-s
The rm command (at least under Ubuntu 10.04) even tells you as much:
pax@pax-desktop:~$ rm -w
rm: invalid option -- 'w'
Try `rm ./-w' to remove the file `-w'.
Try `rm --help' for more information.
The reason that works is because rm doesn't think it's an option (since it doesn't start with -) but it's still referring to the specific file in the current directory.
You could use --, e.g.:
rm -- -file
Just for fun you could also use/abuse find.
find . -name "-s" -delete
or
find . -name "-s" -exec cat {} \;
Besides rm, if you know a scripting language, you can also use it; such languages are not affected by shell warts like this.
Ruby (1.9+)
$ ruby -rfileutils -e 'FileUtils.rm("-s")'
or
$ ruby -e 'File.unlink("-s")'


shell script to download latest file from FTP

I am writing a shell script for the first time. I want to download the latest created file from FTP.
I want to download the latest file in a specific folder. Below is my code for that, but it downloads all the files in the folder, not just the latest one.
ftp -in ftp.abc.com << SCRIPTEND
user xyz xyz
binary
cd Rpts/
mget ls -t -r | tail -n 1
quit
SCRIPTEND
Can you help me with this, please?
Try using the wget or lftp utility instead; lftp compares file times/dates, and as far as I remember FTP scripting is exactly what it's for. Switch to ssh/rsync if possible; you can read a bit about using lftp in place of rsync here:
https://serverfault.com/questions/24622/how-to-use-rsync-over-ftp
Probably the easiest way is to link the latest version on the server side to "current" and always fetch the file it points to. If you're not the admin of the server, you need to list all files with date/time, grab that information, parse it, and decide which one is newest; in the meantime the state on the server can change, and you end up with a more complicated solution than it's worth.
The point is that "ls" sorts its output in some way, and time may not be the default. There are switches to sort it, e.g. by modification time, but even when the server responds OK to ls -t, you can't be sure it really supports sorting; it may just ignore all switches and always return the same list. That's why admins usually provide a "current" link (ln -s). If there's no "current", then to make sure you have the right file you need to parse the listing anyway (ls -al).
http://www.catb.org/esr/writings/unix-koans/shell-tools.html
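For illustration, the "current" approach is just a symlink that the server-side publisher updates after each upload; the file and path names below are hypothetical:
ln -sf report-2024-05-01.csv /srv/ftp/Rpts/current.csv    # run on the server after each upload (placeholder names)
wget ftp://xyz:xyz@ftp.abc.com/Rpts/current.csv           # the client always fetches the same name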
Looking at the code, the line
mget ls -t -r | tail -n 1
doesn't do what you think. It actually grabs all of the output of ls -t and then tail processes the output of mget. You could replace this line with
mget $(ls -t -r | tail -n 1)
but I am not sure if ftp will support such a call...
Try using an FTP client other than ftp. For example, curlftpfs available at curlftpfs.sourceforge.net is a good candidate as it allows you to mount an FTP to a directory as if it is a local folder and then run different commands on the files there (including find, grep, etc.). Take a look at this article.
This way, since the output comes form a local command, you'd be more certain that ls -t returns a properly sorted list.
Btw, it's a bit less convoluted to use ls -t | head -1 than ls -t -r | tail -1. They produce the same result but why reverse and grab from the tail when you can just grab the head :)
If you use curlftpfs then your script would be something like this (assuming server ftp.abc.com and user xyz with password xyz).
mkdir /tmp/ftpsession
curlftpfs ftp://xyz:xyz@ftp.abc.com /tmp/ftpsession
cd /tmp/ftpsession/Rpts
cp -Rpf $(ls -t | head -1) /your/destination/folder/or/file
cd -
umount /tmp/ftpsession
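One note: curlftpfs is a FUSE filesystem, so if a plain umount is refused for a non-root user, the usual FUSE alternative is:
fusermount -u /tmp/ftpsession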
My solution is this:
curl 'ftp://server.de/dir/'$(curl 'ftp://server.de/dir/' 2>/dev/null | tail -1 | awk '{print $(NF)}')

How to delete multiple files at once in Bash on Linux?

I have this list of files on a Linux server:
abc.log.2012-03-14
abc.log.2012-03-27
abc.log.2012-03-28
abc.log.2012-03-29
abc.log.2012-03-30
abc.log.2012-04-02
abc.log.2012-04-04
abc.log.2012-04-05
abc.log.2012-04-09
abc.log.2012-04-10
I've been deleting selected log files one by one, using the command rm -rf; see below:
rm -rf abc.log.2012-03-14
rm -rf abc.log.2012-03-27
rm -rf abc.log.2012-03-28
Is there another way, so that I can delete the selected files at once?
Bash supports all sorts of wildcards and expansions.
Your exact case would be handled by brace expansion, like so:
$ rm -rf abc.log.2012-03-{14,27,28}
The above would expand to a single command with all three arguments, and be equivalent to typing:
$ rm -rf abc.log.2012-03-14 abc.log.2012-03-27 abc.log.2012-03-28
It's important to note that this expansion is done by the shell, before rm is even loaded.
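You can preview exactly what the shell will hand to rm by swapping in echo first:
$ echo abc.log.2012-03-{14,27,28}
abc.log.2012-03-14 abc.log.2012-03-27 abc.log.2012-03-28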
Use a wildcard (*) to match multiple files.
For example, the command below will delete all files with names beginning with abc.log.2012-03-.
rm -f abc.log.2012-03-*
I'd recommend running ls abc.log.2012-03-* to list the files so that you can see what you are going to delete before running the rm command.
For more details see the Bash man page on filename expansion.
If you want to delete all files whose names match a particular form, a wildcard (glob pattern) is the most straightforward solution. Some examples:
$ rm -f abc.log.* # Remove them all
$ rm -f abc.log.2012* # Remove all logs from 2012
$ rm -f abc.log.2012-0[123]* # Remove all files from the first quarter of 2012
Regular expressions are more powerful than wildcards; you can feed the output of grep to rm -f. For example, if some of the file names start with "abc.log" and some with "ABC.log", grep lets you do a case-insensitive match:
$ rm -f $(ls | grep -i '^abc\.log\.')
This will cause problems if any of the file names contain funny characters, including spaces. Be careful.
When I do this, I run the ls | grep ... command first and check that it produces the output I want -- especially if I'm using rm -f:
$ ls | grep -i '^abc\.log\.'
(check that the list is correct)
$ rm -f $(!!)
where !! expands to the previous command. Or I can type up-arrow or Ctrl-P and edit the previous line to add the rm -f command.
This assumes you're using the bash shell. Some other shells, particularly csh and tcsh and some older sh-derived shells, may not support the $(...) syntax. You can use the equivalent backtick syntax:
$ rm -f `ls | grep -i '^abc\.log\.'`
The $(...) syntax is easier to read, and if you're really ambitious it can be nested.
Finally, if the subset of files you want to delete can't be easily expressed with a regular expression, a trick I often use is to list the files to a temporary text file, then edit it:
$ ls > list
$ vi list # Use your favorite text editor
I can then edit the list file manually, leaving only the files I want to remove, and then:
$ rm -f $(<list)
or
$ rm -f `cat list`
(Again, this assumes none of the file names contain funny characters, particularly spaces.)
Or, when editing the list file, I can add rm -f to the beginning of each line and then:
$ . ./list
or
$ source ./list
Editing the file is also an opportunity to add quotes where necessary, for example changing rm -f foo bar to rm -f 'foo bar' .
Just use multiline selection in Sublime Text to combine all of the files into a single line, add a space between each file name, and then add rm at the beginning of the list. This is mostly useful when there isn't a pattern in the filenames you want to delete.
[$]> rm abc.log.2012-03-14 abc.log.2012-03-27 abc.log.2012-03-28 abc.log.2012-03-29 abc.log.2012-03-30 abc.log.2012-04-02 abc.log.2012-04-04 abc.log.2012-04-05 abc.log.2012-04-09 abc.log.2012-04-10
A wildcard would work nicely for this, although to be safe it is best to keep the wildcard as narrow as possible, so something along the lines of this:
rm -f abc.log.2012-*
From the looks of it, those are just single files? The recursive option is not necessary when none of the items are directories, so it's safest to leave it out, as above.
I am not a Linux guru, but I believe you want to pipe your list of output files to xargs rm -rf. I have used something like this in the past with good results. Test on a sample directory first!
EDIT - I might have misunderstood, based on the other answers that are appearing. If you can use wildcards, great. I assumed that your original list that you displayed was generated by a program to give you your "selection", so I thought piping to xargs would be the way to go.
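As a sketch, assuming the generated selection lives in a file called list.txt (a hypothetical name) with one simple filename per line:
$ xargs rm -f < list.txt
$ xargs -d '\n' rm -f < list.txt    # GNU xargs: tolerates spaces (but not newlines) in names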
If you want to delete all the files in a directory at once, for example:
Your directory is named "log" and the "log" directory contains abc.log.2012-03-14, abc.log.2012-03-15, etc. Go to the directory above the log directory and run:
rm -rf log/*
