Backup files on web server starting with ! and ~ - Linux

My LAMP web server is serving backup files like these:
!index.php
!~index.php
bak.index.php
Copy%20of%20index.php
I tried deleting them with rm, but it cannot find the files.
Does this have something to do with bash or vim? How can this be fixed?

Escape the characters (with a backslash) like so:
[ 09:55 jon@hozbox.com ~/t ]$ ll
total 0
-rw-r--r-- 1 jon people 0 Nov 27 09:55 !abc.html
-rw-r--r-- 1 jon people 0 Nov 27 09:55 ~qwerty.php
[ 09:55 jon@hozbox.com ~/t ]$ rm -v \!abc.html \~qwerty.php
removed '!abc.html'
removed '~qwerty.php'
[ 09:56 jon@hozbox.com ~/t ]$ ll
total 0
[ 09:56 jon@hozbox.com ~/t ]$

Another way, besides the escaping suggested by chown, is to write the filenames within double quotes.
Example:
rm "!abc.html" "~qwerty.php"

If you don't like the special treatment of the ! character, use set +H in your shell to turn off history expansion. See the 'HISTORY EXPANSION' section in man bash for more information.
Interestingly, I can delete files starting with ~ without having to escape the file names. (Tilde expansion only kicks in when the text after ~ matches an existing username, so something like ~qwerty.php is usually left untouched.)
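If the names contain characters that are awkward to type or quote at all, find can also do the matching and deletion. A minimal sketch, run in a scratch directory so nothing real is deleted:

```shell
cd "$(mktemp -d)"                   # scratch directory for the demo
touch '!abc.html' '~qwerty.php' index.php
# quoting the patterns keeps the shell from expanding ! and ~
find . -maxdepth 1 -type f \( -name '!*' -o -name '~*' \) -delete
ls                                  # only index.php remains
```

-maxdepth 1 restricts the deletion to the current directory, matching what rm would have done.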


Rename the first part of filenames in Linux?

I have a lot of files starting with processConfig-. I want to rename that prefix to processCfg-. What's the easiest way to change the first part of the file names in Linux?
But I don't want to rename processConfig.json, since it doesn't match my prefix.
> ls -lrth
total 467
-rw-r--r-- 1 david staff 9.8K May 26 15:14 processConfig-data-1234.json
-rw-r--r-- 1 david staff 11K May 26 15:14 processConfig-data-8762.json
-rw-r--r-- 1 david staff 4.9K May 26 15:14 processConfig-dataHold-1.json
-rw-r--r-- 1 david staff 6.6K May 26 15:14 processConfig-letter.json
-rw-r--r-- 1 david staff 5.6K May 26 16:44 processConfig-data-90987.json
-rw-r--r-- 1 david staff 284K May 28 18:44 processConfig.json
Like this:
rename -n 's/^processConfig-/processCfg-/' processConfig-*.json
Remove the -n switch when the output looks good, to rename for real.
man rename
There are other tools with the same name that may or may not be able to do this, so be careful.
The rename command that is part of the util-linux package won't.
If you run the following command (GNU)
$ file "$(readlink -f "$(type -p rename)")"
and the result contains Perl script, ASCII text executable and not ELF, then it is the right tool =)
If not, to make it the default (usually already the case) on Debian and derivatives like Ubuntu:
$ sudo apt install rename
$ sudo update-alternatives --set rename /usr/bin/file-rename
If your distro doesn't ship this command, search your package manager to install it, or install it manually (it has no dependencies).
This tool was originally written by Larry Wall, the father of Perl.
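If no suitable rename is available at all, the same prefix change can be sketched with a plain bash loop and parameter expansion, with no external tool assumed. The sample filenames here just mirror the question:

```shell
cd "$(mktemp -d)"                   # demo in a scratch directory
touch processConfig-data-1234.json processConfig.json
# ${f#processConfig-} strips the old prefix; the glob requires the
# trailing hyphen, so processConfig.json is left untouched
for f in processConfig-*.json; do
    mv -- "$f" "processCfg-${f#processConfig-}"
done
ls
```

The -- stops mv from treating odd filenames as options, the same precaution the rename tool takes internally.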

Script that calls another script to execute on every file in a directory

There are two directories that contain these files.
The first one is /usr/local/nagios/etc/hosts:
[root@localhost hosts]$ ll
total 12
-rw-rw-r-- 1 apache nagios 1236 Feb 7 10:10 10.80.12.53.cfg
-rw-rw-r-- 1 apache nagios 1064 Feb 27 22:47 10.80.12.62.cfg
-rw-rw-r-- 1 apache nagios 1063 Feb 22 12:02 localhost.cfg
And the second one is /usr/local/nagios/etc/services:
[root@localhost services]$ ll
total 20
-rw-rw-r-- 1 apache nagios 2183 Feb 27 22:48 10.80.12.62.cfg
-rw-rw-r-- 1 apache nagios 1339 Feb 13 10:47 Check usage _etc.cfg
-rw-rw-r-- 1 apache nagios 7874 Feb 22 11:59 localhost.cfg
And I have a script that goes through a file in the hosts directory and pastes some lines from it into the file of the same name in the services directory.
The script is run like this:
./nagios-contacts.sh /usr/local/nagios/etc/hosts/10.80.12.62.cfg /usr/local/nagios/etc/services/10.80.12.62.cfg
How can I write another script that calls my script for every file in the hosts directory, pairing each one with the file of the same name in the services directory?
In my script I'm pulling contacts out of 10.80.12.62.cfg in the hosts directory and appending them to the file with the same name in the services directory.
Don't use ls output as input to a for loop; use the shell's built-in wildcards instead, since parsing ls breaks on unusual file names.
for f in /usr/local/nagios/etc/hosts/*.cfg
do
    basef=$(basename "$f")
    ./nagios-contacts.sh "$f" "/usr/local/nagios/etc/services/${basef}"
done
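If some hosts files might lack a counterpart in the services directory, a guard keeps the wrapper from calling the script with a missing path. A sketch, where nagios-contacts.sh is the OP's script:

```shell
for f in /usr/local/nagios/etc/hosts/*.cfg; do
    basef=$(basename "$f")
    svc="/usr/local/nagios/etc/services/$basef"
    # run the script only when the matching services file exists
    if [ -e "$svc" ]; then
        ./nagios-contacts.sh "$f" "$svc"
    fi
done
```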
It sounds like you just need to do some iteration.
pwd
for file in *; do ./nagios-contacts.sh "$file"; done
This will loop over all files in the current directory.
You can also make it more general with an absolute path:
abspath=$1
for file in "$abspath"/*; do ./nagios-contacts.sh "$file"; done
which loops over all files in a given directory and passes each full path into your script.

Permissions of files within subfolders with SGID in Linux

I have a file server based on Ubuntu 14.04. Many users exist on it, each belonging to their own group (such as lucas:lucas) but also to a common group called "sambashare".
lucas@arturito:~$ cat /etc/group | grep lucas
adm:x:4:lucas,syslog
lp:x:7:saned,lucas
cdrom:x:24:lucas
sudo:x:27:lucas
dip:x:30:lucas
plugdev:x:46:lucas
lucas:x:1000:
lpadmin:x:111:lucas
sambashare:x:112:lucas
There is also a shared folder under the /home: /home/share. Such a folder has the SGID bit enabled, so files created under it will belong to the "sambashare" group:
lucas@arturito:/home$ ls -l | grep samba
drwxrwsr-x 10 share sambashare 4096 Apr 24 13:44 share
lucas@arturito:/home/share$ touch test.text
lucas@arturito:/home/share$ ls -l test.text
-rw-rw-r-- 1 lucas sambashare 0 Apr 24 14:02 test.text
So, as seen above, files created directly under /home/share come out fine (lucas:sambashare). The issue I'm having is with files created in a deeper subfolder of /home/share:
lucas@arturito:/home/share/99_varios$ touch file.txt
lucas@arturito:/home/share/99_varios$ ls -l | grep file.txt
-rw-rw-r-- 1 lucas lucas 0 Apr 24 14:19 file.txt
As you can see above, file.txt belongs to lucas:lucas, but I was hoping it would be lucas:sambashare.
Any idea on how to solve this? Or, is it solvable?
Thanks in advance,
Lucas
A possible workaround is to use ACLs (file access control lists):
setfacl -R -m default:group:sambashare:rwX /home/share
All new files in /home/share and its sub-folders will still be owned by lucas:lucas, but the sambashare group will have read and write permission on them.
More in getfacl(1) and setfacl(1).
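Also worth noting: the setgid bit only affects files and directories created after it is set, so a subfolder like 99_varios that predates the bit keeps handing out its own group. A sketch of re-applying the group and the setgid bit recursively, demonstrated in a scratch tree (on the real server the path would be /home/share and the group sambashare):

```shell
d=$(mktemp -d)
mkdir -p "$d/99_varios"
chgrp -R "$(id -gn)" "$d"           # placeholder group: the caller's own
# set the setgid bit on directories only, so future files inherit the group
find "$d" -type d -exec chmod g+s {} +
ls -ld "$d/99_varios"               # mode now shows the s bit
```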

Using RSync to copy a sequential range of files

Sorry if this makes no sense, but I will try to give all the information needed!
I would like to use rsync to copy a range of sequentially numbered files from one folder to another.
I am archiving a DCDM (it's a film thing) which contains on the order of 600,000 individually numbered, sequential .tif image files (~10 MB each).
I need to break this up to archive it properly onto LTO6 tapes, and I would like to use rsync to prep the folders so that my simple bash .sh file can automate the various folders and files that I want to back up to tape.
The command I normally use when running rsync is:
sudo rsync -rvhW --progress --size-only <src> <dest>
I use sudo if needed, and I always test the outcome first with --dry-run.
The only way I've got anything to work (without errors) is by using the * wildcard. However, that only matches files with a fixed prefix (e.g. 01* will only move files in the range 010000 - 019999), and I would have to repeat it for 02, 03, 04, etc.
I've looked on the internet, and am struggling to find an answer that works.
This might not be possible, and with 600,000 .tif files, I can't write an exclude rule for each one!
Any thoughts as to how (if at all) this could be done?
Owen.
You can check for the file name starting with a digit by using pattern matching:
for file in [0-9]*; do
    # do something with $file, whose name starts with a digit
done
Or, you could enable the extglob option and loop over all file names that contain only digits. This could eliminate any potential unwanted files that start with a digit but contain non-digits after the first character.
shopt -s extglob
for file in +([0-9]); do
    # do something with $file, whose name contains only digits
done
+([0-9]) expands to one or more occurrences of a digit.
Update:
Based on the file name pattern in your recent comment:
shopt -s extglob
for file in legendary_dcdm_3d+([0-9]).tif; do
    # do something with $file
done
Globbing is the shell feature that expands a wildcard into a list of matching file names. You have already used it in your question.
For the following explanations, I will assume we are in a directory with the following files:
$ ls -l
-rw-r----- 1 5gon12eder staff 0 Sep 8 17:26 file.txt
-rw-r----- 1 5gon12eder staff 0 Sep 8 17:26 funny_cat.jpg
-rw-r----- 1 5gon12eder staff 0 Sep 8 17:26 report_2013-1.pdf
-rw-r----- 1 5gon12eder staff 0 Sep 8 17:26 report_2013-2.pdf
-rw-r----- 1 5gon12eder staff 0 Sep 8 17:26 report_2013-3.pdf
-rw-r----- 1 5gon12eder staff 0 Sep 8 17:26 report_2013-4.pdf
-rw-r----- 1 5gon12eder staff 0 Sep 8 17:26 report_2014-1.pdf
-rw-r----- 1 5gon12eder staff 0 Sep 8 17:26 report_2014-2.pdf
The simplest case is to match all files. The following makes for a poor man's ls.
$ echo *
file.txt funny_cat.jpg report_2013-1.pdf report_2013-2.pdf report_2013-3.pdf report_2013-4.pdf report_2014-1.pdf report_2014-2.pdf
If we want to match all reports from 2013, we can narrow the match:
$ echo report_2013-*.pdf
report_2013-1.pdf report_2013-2.pdf report_2013-3.pdf report_2013-4.pdf
We could, for example, have left out the .pdf part but I like to be as specific as possible.
You have already come up with a solution that uses this to select a range of numbered files. For example, we can match reports by quarter:
$ for q in 1 2 3 4; do echo "$q. quarter: " report_*-$q.pdf; done
1. quarter: report_2013-1.pdf report_2014-1.pdf
2. quarter: report_2013-2.pdf report_2014-2.pdf
3. quarter: report_2013-3.pdf
4. quarter: report_2013-4.pdf
If we are too lazy to type 1 2 3 4, we could have used $(seq 4) instead. This invokes the program seq with the argument 4 and substitutes its output (1 2 3 4 in this case).
Now back to your problem: If you want chunk sizes that are a power of 10, you should be able to extend the above example to fit your needs.
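To sketch that concretely for the 600,000-file case, assuming six-digit sequential names so that each two-digit prefix selects a 10,000-file chunk (the src/dest paths and the small file set here are placeholders for the demo):

```shell
cd "$(mktemp -d)"                   # scratch demo
mkdir src dest
touch src/000001.tif src/010000.tif src/015000.tif
# one rsync run per chunk; the real job would use $(seq -w 0 59)
# to cover prefixes 00 through 59
for p in 00 01; do
    rsync -rvhW --size-only src/"$p"*.tif dest/
done
ls dest
```

seq -w pads every number to the same width, so the generated prefixes line up with the zero-padded file names.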
Old question, I know, but someone may find this useful: the above examples for expanding a range also work with rsync. For example, to copy files starting with a, b and c, but not d and e, from /tmp/from_here to /tmp/to_here:
$ rsync -avv /tmp/from_here/[a-c]* /tmp/to_here
sending incremental file list
delta-transmission disabled for local transfer or --whole-file
alice/
bob/
cedric/
total: matches=0 hash_hits=0 false_alarms=0 data=0
sent 89 bytes received 24 bytes 226.00 bytes/sec
total size is 0 speedup is 0.00
If you are writing to LTO6 tapes, you should consider adding --inplace to your command. --inplace is meant for writing to linear media such as LTO tape.

Why doesn't grep work if a file is not specified?

I have a problem with the Linux grep command: it doesn't work!
I am trying the following test on my Ubuntu system:
I created the folder /home/andrea/Scrivania/prova.
Inside this folder I created a txt file named prova.txt, wrote the string test into it, and saved it.
In the shell I first changed into /home/andrea/Scrivania/prova and then launched grep in the following way:
~/Scrivania/prova$ grep test
The problem is that the cursor just keeps blinking and grep finds NOTHING! Why? What is the problem?
You've not provided any files for the grep command to scan:
grep "test" *
or, for a recursive search:
grep -r "test" *
Because grep searches standard input if no files are given. Try this.
grep test *
You are not running the command you were looking for.
grep test * will look for test in all files in your current directory.
grep test prova.txt will look for test specifically in prova.txt.
(grep test will match the string test on stdin, and will not return until EOF.)
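That stdin mode is what makes grep useful at the end of a pipeline. A quick sketch:

```shell
# grep reads the pipe instead of a file
printf 'first line\ntest line\nlast line\n' | grep test
# prints: test line
```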
You need to pipe something into grep - you can't just call grep test without any other arguments, because then it sits waiting on standard input. Try grep test *.
Another use for grep is to pipe a command into it.
e.g. This is my home directory:
drwx------+ 3 oliver staff 102 12 Nov 21:57 Desktop
drwx------+ 10 oliver staff 340 17 Nov 18:34 Documents
drwx------+ 17 oliver staff 578 20 Nov 18:57 Downloads
drwx------@ 12 oliver staff 408 13 Nov 20:53 Dropbox
drwx------@ 52 oliver staff 1768 11 Nov 12:05 Library
drwx------+ 3 oliver staff 102 12 Nov 21:57 Movies
drwx------+ 5 oliver staff 170 17 Nov 10:40 Music
drwx------+ 3 oliver staff 102 20 Nov 19:17 Pictures
drwxr-xr-x+ 4 oliver staff 136 12 Nov 21:57 Public
If I run (where l is an alias for ls -l)
l | grep Do
I get the result
drwx------+ 10 oliver staff 340 17 Nov 18:34 Documents
drwx------+ 17 oliver staff 578 20 Nov 18:57 Downloads
Remember to pipe input into grep when no file is given.
From the grep man page:
grep searches the named input FILEs (or standard input if no files are named, or the file name - is given) for lines containing a match to the given PATTERN.
If you don't provide file name(s) for it to use, it will try to read from stdin.
Try grep test *
As per GNU Grep 3.0:
A file named - stands for standard input. If no input is specified, grep searches the working directory . if given a command-line option specifying recursion; otherwise, grep searches standard input.
So for the OP's command, with nothing else specified, grep tries to search standard input, where nothing is actually arriving.
A simple approach is grep -r [pattern]: as described above, -r specifies recursion, searching the current directory and its sub-directories.
Also note that the wildcard * expands to directories as well as plain files; without -r, grep skips each directory and prints a hint:
grep: [directory_name]: Is a directory
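For instance, a minimal recursive search (run in a scratch directory; the file layout mirrors the question):

```shell
cd "$(mktemp -d)"
mkdir prova
echo test > prova/prova.txt
grep -r test .
# prints: ./prova/prova.txt:test
```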
