How to clean Ctrl+Z in a Linux file? - linux

I have copied some text files from Windows to a Red Hat machine using PuTTY. When I try to execute the files, I get an error because Ctrl+Z characters were added to the files.
I have used this command:
tr -d '\15\32' < /path/gems/spec/rms.spec > /path/gems/spec/rms.spec
But running the above command per file is tedious because I have 1000+ .spec files under various folders.
Is there a Linux command that can find all the .spec files under a directory and remove the Ctrl+Z characters from each of them?
Thanks in advance

If you are using bash, you can try this:
find . -type f -name '*.spec' -print0 |
while IFS= read -r -d '' dot_spec_file; do
    tr -d '\15\32' < "$dot_spec_file" | sponge "$dot_spec_file"
done
Execute it from the parent directory of your .spec files.
sponge reads all its input before writing to the file; from its manual:
sponge reads standard input and writes it out to the specified file.
Unlike a shell redirect, sponge soaks up all its input before opening
the output file. This allows constructing pipelines that read from and
write to the same file.
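If sponge (from moreutils) is not installed, the same pattern can be sketched with a temporary file instead; the variable names here are illustrative:
find . -type f -name '*.spec' -print0 |
while IFS= read -r -d '' f; do
    # Write the cleaned content to a temp file, then swap it into place,
    # so we never read and write the same file in a single pipeline.
    tmp=$(mktemp) &&
    tr -d '\15\32' < "$f" > "$tmp" &&
    mv "$tmp" "$f"
done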

Related

How to copy a file to multiple subdirectories in Linux

I have a file that needs to be copied to every directory called test.
The directory structure is as below:
/contentroot/path/a/x/test
/contentroot/path/a/y/test
/contentroot/path/a/z/test
--------------------------
As above, I have more than 250 such test directories.
I have tried the command below (using an asterisk), but it only copies to one test directory and fails with an error (cp: omitting directory):
cp myfile.txt /contentroot/path/a/*/test
Any help?
Perhaps a for loop?
for FOLDER in /contentroot/path/a/*/test; do
    cp myfile.txt "$FOLDER"
done
You can feed the output of an echo into xargs. xargs will then run the cp command three times, appending the next directory path piped to it from the echo each time.
The -n 1 option tells xargs to append only one of those arguments to cp per invocation.
echo /contentroot/path/a/x/test /contentroot/path/a/y/test /contentroot/path/a/z/test | xargs -n 1 cp myfile.txt
Warnings! Firstly, this will overwrite files (if they exist), and secondly, any bash command should be tested and used at the runner's risk! ;)
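With 250+ target directories, one alternative worth considering is to let find enumerate them. This is a sketch under the assumption that every target is a directory named test somewhere below /contentroot/path/a:
# Run one cp per directory named "test" found under the tree.
find /contentroot/path/a -type d -name test -exec cp myfile.txt {} \;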

Loop in bash script

I have a directory containing gzipped data files. I want to run each file through the script est_abundance.py. But first I need to unzip them. So I have this bash script:
for file in /home/doy.user/scratch1/Secoutput/; do
    cd "$file"
    gunzip *kren.gz
    python analysis1.py -i /Secoutput/*kren -k gkd_output -o /bracken_output/$(basename *kren).txt
    wait
done
The problem is that the bash script keeps unzipping all of the data files; it does not continue to the next command after unzipping one file.
Can you help me correct this? I just want every command to be run for every file.
Use the loop below. Notice that you should use the $file variable, and you can get the name of the file after unzipping by stripping the .gz suffix with ${file%.gz}:
for file in /home/doy.user/scratch1/Secoutput/*; do
    gunzip "$file"
    python analysis1.py -i "${file%.gz}" -k gkd_output -o /bracken_output/"$(basename "${file%.gz}")".txt
    wait
done
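For reference, ${file%.gz} is plain POSIX parameter expansion that strips the shortest matching suffix; a quick demo with a hypothetical path:
file=/home/doy.user/scratch1/Secoutput/sample1.kren.gz   # hypothetical name
echo "${file%.gz}"       # /home/doy.user/scratch1/Secoutput/sample1.kren
basename "${file%.gz}"   # sample1.kren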

In bash, how can I delete a file whose name was written in a file?

For example, I have a file "~/garbage_log" whose content is always changing.
cat garbage_log
~/garbage/garbage.png
~/garbage/garbage2.jpg
Now I want a bash command to delete the files listed in the log. How can I do it?
You can use this simple xargs command to run rm on each line of the garbage_log file:
xargs rm < garbage_log
You can use rm to delete the files and xargs to execute the command, piping in the output of the previous command.
You can do it like this:
cat garbage_log | xargs sudo rm
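Two caveats apply to both of these: xargs splits its input on whitespace by default, and a literal ~ read from a file is not tilde-expanded by the shell, so the log should ideally contain absolute paths. With GNU xargs, a more defensive sketch is:
# -d '\n' treats each line as a single argument, so names with spaces survive;
# "--" stops rm from treating a leading "-" in a name as an option.
xargs -d '\n' rm -- < garbage_log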

Script to open latest text file from a directory

I need a shell script to open the latest text file from a given directory; it will then be copied to another directory. How can I achieve it?
I need logic which will search a directory and give the latest file (the name of the text file can be anything, so I need to find the most recently modified text file).
You can do something like this:
#!/bin/sh
SOURCE_DIR=/home/juned/Downloads
DEST_DIR=/tmp/
LAST_MODIFIED_FILE=`ls -t ${SOURCE_DIR} | head -1`
echo $LAST_MODIFIED_FILE
#Open file
vim $SOURCE_DIR/$LAST_MODIFIED_FILE
#Copy file
cp $SOURCE_DIR/$LAST_MODIFIED_FILE $DEST_DIR
echo "File copied successfully"
You can specify any application in which you want to open that file, such as gedit or kate; here I've used vim.
xdg-open - opens a file or URL in the user's preferred application
Not an expert in bash, but you can try this logic:
First, grab the latest file using ls -t (-t sorts by modification time) and head -1 (takes the first entry):
F=`ls -t * | head -1`
Then open the file using and editor:
xdg-open $F
gedit $F
...
As suggested by @AJefferiss, you can directly do:
xdg-open $(ls -t * | head -1)
gedit $(ls -t * | head -1)
For editing the most recently modified/created file:
vim $(ls -t | head -1)
For editing the latest in alphanumerical order:
vim $(ls -1 | tail -1)
In one line (if you are sure that there are only files):
vim `ls -t .|head -1`
It will be opened in vim (or use another text editor).
If there are directories, you should write a script with a loop and test every file (to check it's not a directory):
if [ -f "$FILE" ];
Or you can also use find, or use a pipe to get the latest file:
ls -lt .|sed -n 2p|grep -v '^d'
The existing answers are helpful, but fall short when it comes to dealing with filenames with embedded spaces or other shell metacharacters.[1]
# Get the most recently modified *.txt file.
# (On *assignment*, names with spaces, ... are not a concern.)
f=$(ls -t *.txt | head -n 1)
# *Use* the variable enclosed in *double-quotes* to ensure that it is passed
# to the target command unmodified.
xdg-open "$f" # could also use "$(ls -t *.txt | head -n 1)" directly
Additionally, some answers use all-uppercase shell variable names, which should be avoided so as to prevent conflicts with environment variables.
[1] Due to use of ls, filenames with embedded newlines won't be handled correctly, but that's rarely a real-world concern.
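If you'd rather avoid parsing ls altogether, here is a sketch using GNU find (the *.txt pattern and the current directory are assumptions):
# %T@ prints the modification time as seconds since the epoch, so a numeric
# sort puts the newest file last; cut then strips the timestamp field.
f=$(find . -maxdepth 1 -type f -name '*.txt' -printf '%T@ %p\n' |
    sort -n | tail -n 1 | cut -d' ' -f2-)
xdg-open "$f"
This handles spaces and other metacharacters, though, as in [1], embedded newlines remain an edge case.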

Remove all files of a certain type except for one type in linux terminal

On my computer running Ubuntu, I have a folder full of hundreds of files, all named "index.html.n" where n starts at one and counts upward. Some of those files are actual HTML files, some are image files (png and jpg), and some of them are zip files.
My goal is to permanently remove every single file except the zip archives. I assume it's some combination of rm and file, but I'm not sure of the exact syntax.
If it fits into your argument list and no filenames contain a colon, a simple pipe with xargs should do:
file * | grep -vi zip | cut -d: -f1 | tr '\n' '\0' | xargs -0 rm
First, find to locate the matching files, then file to get the file types. sed drops the zip archives and strips everything but the filenames from the output of file. Lastly, rm deletes what remains:
find -name 'index.html.[0-9]*' | \
xargs file | \
sed -n '/: Zip archive/!s/\([^:]*\):.*/\1/p' | \
xargs rm
I would run:
for f in index.html.*
do
    file "$f" | grep -qi zip
    [ $? -ne 0 ] && rm -i "$f"
done
Remove the -i option if you feel confident enough.
Here's the approach I'd use; it's not entirely automated, but it's less error-prone than some other approaches.
file * > cleanup.sh
or
file index.html.* > cleanup.sh
This generates a list of all files (excluding dot files), or of all index.html.* files, in your current directory and writes the list to cleanup.sh.
Using your favorite text editor (mine happens to be vim), edit cleanup.sh:
Add #!/bin/sh as the first line
Delete all lines containing the string "Zip archive"
On each line, delete everything from the : to the end of the line (in vim, :%s/:.*$//)
Replace the beginning of each line with "rm" followed by a space
Exit your editor, updating the file.
chmod +x cleanup.sh
You should now have a shell script that will delete everything except zip files.
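For example, after those edits, cleanup.sh might look like this (the filenames are illustrative):
#!/bin/sh
rm index.html.1
rm index.html.2
rm index.html.4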
Carefully inspect the script before running it. Look out for typos, and for files whose names contain shell metacharacters. You might need to add quotation marks to the file names.
(Note that if you do this as a one-line shell command, you don't have the opportunity to inspect the list of files you're going to delete before you actually delete them.)
Once you're satisfied that your script is correct, run
./cleanup.sh
from your shell prompt.
for i in index.html.*
do
    type=$(file "$i")
    if [[ ! $type =~ Zip ]]
    then
        rm "$i"
    fi
done
Change the rm to ls for testing purposes.
