Cat changes owner? - linux

I have to edit a file owned by root via ssh. I add an entry to the file, preserve the first 9 lines, and write the reordered remainder to a temporary file. I know that > overwrites what's in the file (and that's what I want), but I need to preserve root as the owner of the file. How can I do this? Thanks!
#!/bin/bash
user=""
echo "User:"
read user
ssh xxxx@xxxx "
sed -i '\$a$user' file;
(head -n 9 file ; tail -n +10 file | sort) > temp;
cat temp > file;
rm -f temp
"

It's not cat that's changing the owner, it's sed. When you use sed -i, it does something like:
mv file file.bak
sed "\$a$user" file.bak > file
rm file.bak
As you can see, this creates a new file with the original file's name, and it's owned by the user that creates the file.
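You can see this for yourself with a quick experiment (a sketch; it assumes a scratch directory you own and sudo access to set up the test file):
sudo touch demo && sudo chown root demo
ls -l demo            # owned by root
sed -i '$atest' demo  # append a line in place, as your own user
ls -l demo            # now owned by you: sed -i recreated the file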
If you want to avoid this, make a copy of the original file instead of using the -i option.
cp file /tmp/file.$$
sed "\$a$user" /tmp/file.$$ > file
rm /tmp/file.$$
Or you could run sed first and do the reordering on its output, still without ever recreating file:
sed "\$a$user" file > temp
(head -n 9 temp ; tail -n +10 temp | sort) > temp2
cat temp2 > file
rm temp temp2

It's been a while since I wrote in Bash, but I think a starting point would be
chown root $file    # if you have a variable with the file name in it
or
chown root thefile.txt    # if you want it hard coded
Another variable in the equation is who owns the cat process: I think the owner of the running process is what decides the ownership of the files it outputs.
Maybe you could also try
$ sudo cat temp > file
because the session would then belong to root and therefore the output would belong to root???
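Note: that last one won't work as hoped, because the > file redirection is performed by your unprivileged shell before sudo even runs, so the file is still created (and owned) by you. If sudo is available, the usual idiom is to let a privileged tee do the writing:
sudo tee file < temp > /dev/null    # tee runs as root and writes file itself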

I take it that the user who logs in cannot become root?
Then your best bet is to use dd:
dd if=tmpfile of=outfile
Of course, do all the ordering, sedding, awking and grepping on your tmp file. dd in this usage is equivalent to > without creating a new file.
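For example (a sketch; it assumes your user has write permission on the root-owned file, e.g. via a group or world write bit, since dd opens it for writing as you):
(head -n 9 file ; tail -n +10 file | sort) > temp
dd if=temp of=file    # truncates and rewrites the existing file, keeping its inode and owner
rm temp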

#!/bin/bash
user=""
echo "User:"
read user
ssh xxxx@xxxx "
sed -i '\$a$user' file;
(head -n 9 file ; tail -n +10 file | sort) > temp;
sudo mv temp file;
sudo chown root file
"
This will work better if, on the xxxx machine, the user you're logging in as has password-less access to sudo. You can do this by adding this entry to the /etc/sudoers file there:
xxxx ALL=NOPASSWD: ALL
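If you'd rather not grant blanket password-less sudo, the rule can be restricted to just the commands the script runs (the /bin paths are an assumption; verify them with which mv and which chown on that machine):
xxxx ALL=NOPASSWD: /bin/mv, /bin/chown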

Related

lftp delete multiples files with Bash

I'm trying to create a script that deletes all the old files except the three most recent ones in my backup directory, using lftp.
I tried doing this with ls -1tr, which returns all the files in ascending date order, and then a head -$NB_BACKUP_TO_RM ($NB_BACKUP_TO_RM is the number of files I want to delete from my list); these two commands return the correct files.
After this I want to remove all of them, so I do xargs rm --, but Bash says that the files don't exist... I think this command is not running in the remote directory but in the local one, and I don't know what I can do to delete these files (from my returned list).
Here is the full code:
MAX_BACKUP=3
NB_BACKUP=$(lftp -e "ls -1tr $REMOTE_DIR/full_backup_ftp* | wc -l ; quit" -u $USER,$PASSWORD $HOST)
if (( $NB_BACKUP > $MAX_BACKUP ))
then
    NB_BACKUP_TO_RM=$(($NB_BACKUP-$MAX_BACKUP))
    REMOVE=$(lftp -e "ls -1tr $REMOTE_DIR/full_backup_ftp* | head -$NB_BACKUP_TO_RM | xargs rm -- ; quit" -u $USER,$PASSWORD $HOST)
    echo $REMOVE
fi
Do you have any idea what the problem is? How can I delete the files in my list (after ls -1tr $REMOTE_DIR/full_backup_ftp* and head -$NB_BACKUP_TO_RM)?
Thanks for your help.
Starting an SFTP connection can be time consuming. The slightly modified solution below avoids multiple lftp sessions. It will perform much better than the alternative solution, especially if a large number of files have to be purged.
Basically, it leverages lftp's flexibility to mix lftp commands with external commands. It creates a command file containing a series of rm commands (built with head, xargs, ...), and executes those commands INSIDE the same lftp session.
Also note that lftp's ls does not allow wildcards; use cls instead.
Make sure you test this carefully, because of the potential removal of important files.
lftp -u $USER,$PASSWORD $HOST <<__CMD__
cls -1tr $REMOTE_DIR/full_backup_ftp* | head -$NB_BACKUP_TO_RM | xargs -I{} echo rm {} > rm_list.txt
source rm_list.txt
__CMD__
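Since the rm commands land in rm_list.txt before anything is deleted, you can also do a dry run first and just display the list (! runs a local shell command inside lftp):
lftp -u $USER,$PASSWORD $HOST <<__CMD__
cls -1tr $REMOTE_DIR/full_backup_ftp* | head -$NB_BACKUP_TO_RM | xargs -I{} echo rm {} > rm_list.txt
!cat rm_list.txt
__CMD__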
Or as a one-liner, using lftp's ability to execute a dynamically generated command (source -e). It eliminates the temporary file.
lftp -u $USER,$PASSWORD $HOST <<__CMD__
source -e 'cls -1tr $REMOTE_DIR/full_backup_ftp* | head -$NB_BACKUP_TO_RM | xargs -I{} echo rm {}'
__CMD__
According to man lftp, xargs is an unknown command for lftp. And xargs rm deletes local files, not remote files.
So use xargs as below; it works for me:
lftp -e "ls -1tr $REMOTE_DIR/full_backup_ftp*; quit" -u $USER,$PASSWORD $HOST | head -$NB_BACKUP_TO_RM | xargs -I {} lftp -e 'rm '{}'; quit' -u $USER,$PASSWORD $HOST

How to read paths from a text file and count the files under each path

I have a text file which contains multiple paths like below
$ cat directory.txt
/aaaa/bbbbb/ccccc/
/aaaa/bbbbb/eeeee/
/aaaa/bbbbb/ddddd/
I need to change directory to each path in the text file and get the count of files under each path. Below is the code I used, but it is not working.
i=cat /aaaa/bbbbb/directory.txt
while read $i ;do
cd $i
ls |wc -l
done < /aaaa/bbbbb/count.txt
Actually you're almost there. The line i=... is not needed, read $i should be read i, and you can simply call ls with the path instead of cd'ing into it first.
#!/bin/bash
while read i; do
    ls "$i" | wc -l
done < "/xxx/yyy/count.txt"
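If you also want to skip subdirectories and avoid parsing ls output, here is a variant using find (a sketch; it counts only the regular files directly under each path):
#!/bin/bash
while read -r dir; do
    printf '%s: %d\n' "$dir" "$(find "$dir" -maxdepth 1 -type f | wc -l)"
done < "/xxx/yyy/count.txt"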
Thanks everyone, I tried this code and it is working fine:
#!/bin/bash
for i in $(cat /nrt/home/directory.txt);
do
    cd $i
    ls | wc -l
done > /nrt/home/count.txt

Run a bash script on an entire directory

I have a bash script that removes the first five lines and last nine lines on a user specified file. I need it to run on ~100 files a day, is there any way to run the script against the entire directory and then output every file to a separate directory with the same file name? Here is the script I am using.
read -e file
cat "$file" | sed '1,7d' | head -n -9 > ~/Documents/Databases/db3/"$file"
rm $file
I set it up to loop because there are so many files, but that is the core of the script.
You can do:
for file in ~/Documents/Databases/db3/*; do
    [ -f "$file" ] && sed '1,7d' "$file" | head -n -9 > /out/dir/"${file##*/}"
done
Assuming the input directory is ~/Documents/Databases/db3/ and the output directory is /out/dir/.
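The ${file##*/} expansion strips everything up to and including the last /, leaving just the file name:
file=~/Documents/Databases/db3/data.txt    # hypothetical path
echo "${file##*/}"                         # prints: data.txt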

Why can't this script execute the other script

This script looks for all users that have the string RECHERCHE inside them. I tried running it with sudo and it worked, but then it stopped at line 8 (permission denied). Even when removing the sudo from the script, this issue still happens.
#!/bin/bash
#challenge : user search and permission rewriting
echo -n "Enter string to search : "
read RECHERCHE
echo $(cat /etc/passwd | grep "/home" | cut -d: -f5 | grep -i "$RECHERCHE" | sed s/,//g)
echo "Changing permissions"
export RECHERCHE
sudo ./challenge2 $(/etc/passwd) &
The second script then changes the permissions of each file belonging to each user that RECHERCHE found, in the background. If you could help me figure out what this isn't doing right, it would be of great service.
#!/bin/bash
while read line
do
    if [-z "$(grep "/home" | cut -d: -f5 | grep -i "$RECHERCHE")" ]
    then
        user=$(cut -f: -f1)
        file=$(find / -user $(user))
        if [$(stat -c %a file) >= 700]
        then
            chmod 700 file 2>> /home/$(user)/challenge.log
        fi
        if [$(stat -c %a file) < 600]
        then
            chmod 600 file 2>> /home/$(user)/challenge.log
        fi
        umask 177 2>> /home/$(user)/challenge.log
    fi
done
I have no idea what I'm doing.
The $(...) syntax means command substitution, that is: it will be replaced by the output of the command within the parentheses.
Since /etc/passwd is not a command but just a text file, you cannot execute it.
So if you want to pass the contents of /etc/passwd to your script, you would just call it:
./challenge2 < /etc/passwd
Or, if you need special permissions to read the file, something like
sudo cat /etc/passwd | ./challenge2
Also, in your challenge2 script you are using $(user), which is wrong as you really only want to expand the user variable: use curly braces for this, like ${user}.
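A quick demonstration of the difference (user is just an example variable):
user=alice
echo "${user}"    # prints: alice
echo "$(user)"    # fails: bash tries to run a command named "user"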
/etc/passwd?
Not what you were asking, but you probably should not read /etc/passwd directly anyhow.
If you want to get a list of users, use the following command:
$ getent passwd
This will probably give you more users than those stored in /etc/passwd, as your system might use other PAM backends (ldap, ...).
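For example, to list only the login names of users with a home directory under /home (the same filter the script above applies to /etc/passwd):
getent passwd | grep '/home' | cut -d: -f1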

Linux commands to copy one file to many files

Is there a one-line command/script to copy one file to many files on Linux?
cp file1 file2 file3
copies the first two files into the third. Is there a way to copy the first file into the rest?
Does
cp file1 file2 ; cp file1 file3
count as a "one-line command/script"? How about
for file in file2 file3 ; do cp file1 "$file" ; done
?
Or, for a slightly looser sense of "copy":
tee <file1 file2 file3 >/dev/null
Just for fun, if you need a big list of files, a comment suggested:
tee <sourcefile.jpg targetfiles{01-50}.jpg >/dev/null
But there's a little typo. It should be:
tee <sourcefile.jpg targetfiles{01..50}.jpg >/dev/null
And as mentioned above, that doesn't copy permissions.
You can improve/simplify the for approach (answered by @ruakh) by using ranges from bash brace expansion:
for f in file{1..10}; do cp file $f; done
This copies file into file1, file2, ..., file10.
Resource to check:
http://wiki.bash-hackers.org/syntax/expansion/brace#ranges
for FILE in "file2" "file3"; do cp file1 $FILE; done
You can use shift:
file=$1
shift
for dest in "$@" ; do
    cp -r "$file" "$dest"
done
cat file1 | tee file2 | tee file3 | tee file4 | tee file5 >/dev/null
(no loops used)
To copy the content of one file (fileA.txt) to many files (fileB.txt, fileC.txt, fileD.txt) in Linux, use the following combination of the cat and tee commands:
cat fileA.txt | tee fileB.txt fileC.txt fileD.txt >/dev/null
This is applicable to any file extension; only the file names change, everything else remains the same.
Use something like the following. It works on zsh.
cat file > firstCopy > secondCopy > thirdCopy
or
cat file > {1..100} (for filenames with numbers).
It's good for small files.
You should use the cp script mentioned earlier for larger files.
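This works because of zsh's MULTIOS option (set by default), which turns multiple redirections on one command into an implicit tee:
setopt MULTIOS              # normally already on in zsh
cat file > copy1 > copy2    # the same stream is written to both files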
I'd recommend creating a general use script and a function (empty-files), based on the script, to empty any number of target files.
Name the script copy-from-one-to-many and put it in your PATH.
#!/bin/bash -e
# _ _____
# | |___ /_ __
# | | |_ \ \/ / Lex Sheehan (l3x)
# | |___) > < https://github.com/l3x
# |_|____/_/\_\
#
# Copy the contents of one file to many other files.
source=$1
shift
for dest in "$@"; do
    cp "$source" "$dest"
done
exit
NOTES
The shift above removes the first element (the source file path) from the list of arguments ("$@").
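A tiny illustration of what shift does to the argument list:
set -- a b c    # pretend the script received three arguments
shift           # drops the first one
echo "$@"       # prints: b c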
Examples of how to empty many files:
Create file1, file2, file3, file4 and file5 with content:
for f in file{1..5}; do echo $f > "$f"; done
Empty many files:
copy-from-one-to-many /dev/null file1 file2 file3 file4 file5
Empty many files easier:
# Create files with content again
for f in file{1..5}; do echo $f > "$f"; done
copy-from-one-to-many /dev/null file{1..5}
Create an empty-files function based on copy-from-one-to-many:
function empty-files()
{
    copy-from-one-to-many /dev/null "$@"
}
Example usage
# Create files with content again
for f in file{1..5}; do echo $f > "$f"; done
# Show contents of one of the files
echo -e "file3:\n $(cat file3)"
empty-files file{1..5}
# Show that the selected file no longer has contents
echo -e "file3:\n $(cat file3)"
Don't just steal code. Improve it; Document it with examples and share it. - l3x
Here's a version that will preface each cp command with sudo:
#!/bin/bash -e
# Filename: copy-from-one-to-many
# _ _____
# | |___ /_ __
# | | |_ \ \/ / Lex Sheehan (l3x)
# | |___) > < https://github.com/l3x
# |_|____/_/\_\
#
# Copy the contents of one file to many other files.
# Pass --sudo if you want each cp to be performed with sudo
# Ex: copy-from-one-to-many $(mktemp) /tmp/a /tmp/b /tmp/c --sudo
if [[ "$*" == *--sudo* ]]; then
maybe_use_sudo=sudo
fi
source=$1
shift
for dest in "$@"; do
    if [ "$dest" != '--sudo' ]; then
        $maybe_use_sudo cp "$source" "$dest"
    fi
done
exit
You can use standard scripting commands for that instead:
Bash:
for i in file2 file3 ; do cp file1 $i ; done
The simplest/quickest solution I can think of is a for loop:
for target in file2 file3; do cp file1 "$target"; done
A dirty hack would be the following (I strongly advise against it, and only works in bash anyway):
eval 'cp file1 '{file2,file3}';'
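You can preview what eval ends up executing by letting echo show the brace expansion:
echo 'cp file1 '{file2,file3}';'
# prints: cp file1 file2; cp file1 file3;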
Go with the fastest cp operations
seq 1 10 | xargs -P 0 -I xxx cp file file-xxx
It means:
seq 1 10: count from 1 to 10
|: pipe it to xargs
-P 0: run in parallel, as many processes as needed
-I xxx: the placeholder name for each input xargs receives
cp file file-xxx: copy file to file-1, file-2, etc.
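To preview the commands before running them, swap cp for echo cp as a dry run:
seq 1 10 | xargs -P 0 -I xxx echo cp file file-xxx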
And if the names of the files are different, here is another solution.
First have the list of files which are going to be created. e.g.
one
two
three
four
five
Second, save this list on disk and read it with xargs just like before, but without using seq:
xargs -P 0 -I xxx cp file xxx < list
which means 5 copy operations in parallel:
cp file one
cp file two
cp file three
cp file four
cp file five
And for xargs, here is what happens behind the scenes (5 forks):
3833 pts/0 Ss 0:00 bash
15954 pts/0 0:00 \_ xargs -P 0 -I xxx cp file xxx < list
15955 pts/0 0:00 \_ cp file one
15956 pts/0 0:00 \_ cp file two
15957 pts/0 0:00 \_ cp file three
15958 pts/0 0:00 \_ cp file four
15959 pts/0 0:00 \_ cp file five
I don't know how correct this is, but I have used something like this:
echo ./file1.txt ./file2.txt ./file3.txt | xargs -n 1 cp file.txt
The echo ./file1.txt ... part lists the destination files and feeds them to xargs one "destination" at a time, hence xargs -n 1. And lastly cp file.txt, which is self-explanatory, I think :)
