"Recursive hexdump" from command line, input and output with same name [duplicate] - linux

I have some files in a directory "documents" (file1, file2, ...) and I would like to save hexdumps of them in another directory "documents_hex" from the command line. Is there a way to run hexdump on each file in "documents" and save the output in "documents_hex" ("documents_hex" is inside "documents"), keeping the same name for input and output?
Example: file1 to /documents_hex/file1, file2 to /documents_hex/file2, ...

Check this code (loop over a glob rather than the output of ls, so file names with spaces are handled and the documents_hex directory itself gets skipped):
cd documents || exit 1
for file in *
do
  [ -f "$file" ] || continue    # skip the documents_hex directory itself
  hexdump -x "$file" > documents_hex/"$file"
done
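If you would rather run it from the parent directory without cd'ing in, a glob-based variant works too. This is just a sketch under the same layout as the question (it creates documents_hex if it is missing and skips it while hexdumping):
mkdir -p documents/documents_hex
for file in documents/*
do
  [ -f "$file" ] || continue    # documents_hex itself is a directory, so it is skipped
  hexdump -x "$file" > documents/documents_hex/"$(basename "$file")"
done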

In Linux shell scripting, if we want to remove duplicate lines, how can I do that other than with the sort -u command? [duplicate]

Write a script that reads a file name from the end user and removes duplicate lines from that file.
#!/bin/bash
read -p "Enter a file name to remove duplicate lines from: " fname
sort -u "$fname" > tmp.txt
mv tmp.txt "$fname"
This removes the duplicate lines, but my content ends up sorted, which I don't want. What should I do?
I want another method to remove duplicate lines in shell scripting.
You can remove duplicate lines with the uniq command, but note that uniq only removes adjacent duplicates, so on its own it only works when the duplicates are consecutive.
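If you need to drop duplicates anywhere in the file while keeping the original order, a common trick is an awk one-liner that remembers which lines it has already seen. A minimal sketch, reusing the tmp.txt scratch file from your script:
awk '!seen[$0]++' "$fname" > tmp.txt && mv tmp.txt "$fname"
Each line is printed only the first time it appears, and the file is never sorted.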

Shell script to split lines in one huge file into two separate files in one go? [duplicate]

Currently my shell script iterates over the lines of one huge file twice (what I want to do is just like the shell script below):
grep 'some_text' huge_file.txt > lines_contains_a.txt
grep -v 'some_text' huge_file.txt > lines_not_contains_a.txt
but it is slow.
How can I do the same thing while iterating over the lines only once?
Thanks!
With GNU awk:
awk '/some_text/ { print >> "lines_contains_a.txt" }
!/some_text/ { print >> "lines_not_contains_a.txt" }' huge_file.txt
With sed:
sed -n '/some_text/ w lines_contains_a.txt
/some_text/! w lines_not_contains_a.txt' huge_file.txt
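A slightly more compact awk variant (just a sketch, with the same placeholder file names) chooses the output file with a ternary. Using > instead of >> is also safe here, because awk keeps the file open and truncates it only once per run, so re-running the script does not append to old output:
awk '{ print > (/some_text/ ? "lines_contains_a.txt" : "lines_not_contains_a.txt") }' huge_file.txt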

How to source a key/value pair file in bash, escaping whitespace? [duplicate]

$ cat foo.txt
a=1one
b=2two
c=3 three
d=4four
$ source foo.txt
bash: three: command not found...
I need to set all the variables listed in foo.txt. How can I source this file while escaping the space character? foo.txt comes from another application, which I cannot control. Or is there an alternative to source?
If the output is so regular, you could try to preprocess the file using sed like this:
$ sed -e "s/=/='/;s/$/'/" < foo.txt >sourced.env
and then source sourced.env. This adds a ' right after the = and a closing ' at the end of each line.
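If you would rather not source the file at all (the sed approach also breaks if a value happens to contain a ' character), a plain read loop can assign the variables directly. A minimal sketch, assuming every non-empty line is a simple key=value pair:
while IFS='=' read -r key value
do
  [ -n "$key" ] || continue          # skip empty lines
  printf -v "$key" '%s' "$value"     # assign the value to the variable named by $key
done < foo.txt
This is bash-specific (printf -v), and because the loop reads from a redirect rather than a pipe, the variables are set in the current shell.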

Concatenating multiple text files passed as arguments to a script into a single file in Bash [duplicate]

How should I concatenate multiple text files, passed as arguments in the terminal, into a single file using a Bash script?
#!/bin/bash
while read $1
do
cat $1 > cat.txt
done
I tried that example but it is not working.
You should use '>>' to concatenate ('>' would truncate the output file on every iteration), and loop over "$@" (all the arguments), not "$#" (the argument count):
for file in "$@"
do
  cat "$file" >> result
done
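Strictly speaking, no loop is needed: cat already accepts multiple file arguments. Assuming cat.txt is the output name you wanted in your attempt, the whole script can be:
#!/bin/bash
cat "$@" > cat.txt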

Deleting a certain line on Linux [duplicate]

For example, I have a caf.txt file and I want to delete the word "donut" in the document without opening the document, on Linux. How can I do it?
To delete just the word "donut":
sed -i 's/donut//g' caf.txt
To delete the lines that contain the word "donut":
sed -i '/donut/d' caf.txt
What I do is:
sed '/text_to_delete/d' filename | sponge filename
sponge (from the moreutils package) soaks up all of its input before writing, so this applies the change to the source file in place.
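If neither GNU sed's -i nor sponge is available, the portable fallback is a temporary file (caf.txt.tmp is just a placeholder name here):
sed '/donut/d' caf.txt > caf.txt.tmp && mv caf.txt.tmp caf.txt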
