Stream edit text file and replace string with increments - node.js

I want to duplicate a markdown file into ~200 files and replace some strings in the document with incrementing changes. How should I do this? I am a new web developer, and my familiarity with PHP and Node.js is just so-so, but I am a decent Linux user. I am thinking of using sed but can't wrap my mind around it.
Let's say I want to duplicate:
post.md "this is post _INCREMENT_"
then I run:
run generate "post.md" --number 2
to generate:
post1.md "this is post 1"
post2.md "this is post 2"

Combining sed and bash, you could try the following:
#!/bin/bash
# show usage on improper arguments
usage() {
    echo "usage: $1 filename.md number"
    exit 1
}
file=$1                        # obtain filename
[[ -n $file ]] || usage "$0"   # filename is not specified
num=$2                         # obtain end number
(( num > 0 )) || usage "$0"    # number is not specified or illegal
for (( i = 1; i <= num; i++ )); do
    new="${file%.md}$i.md"     # new filename with the serial number
    cp -p -- "$file" "$new"    # duplicate the file
    sed -i "s/_INCREMENT_/$i/g" "$new"
    # replace every "_INCREMENT_" with the serial number
done
Save the script above as a file generate or whatever, then run:
bash generate post.md 2
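If you'd rather skip sed, the same job can be done with bash's built-in `${var//pattern/replacement}` substitution. This is just a sketch of the idea; it creates its own sample post.md so the snippet is self-contained:

```shell
#!/bin/bash
# Same loop without sed: slurp the template once, then write each copy
# using bash's ${var//pattern/replacement} expansion.
printf 'this is post _INCREMENT_\n' > post.md   # sample template for the demo
file=post.md
num=2
content=$(<"$file")                  # read the whole template into a variable
for (( i = 1; i <= num; i++ )); do
    printf '%s\n' "${content//_INCREMENT_/$i}" > "${file%.md}$i.md"
done
```

Note that `$(<file)` strips the trailing newline, which the `printf '%s\n'` puts back; for multi-placeholder templates the `//` (replace-all) form matters, just like sed's `g` flag.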

find a file in a folder using a bash script (linux)

1. Create a text file within the folder (done). I used nano to create a file called spn.txt and put some data inside (done):
4:3
5:5
1:5
3:1
4:3
4:1
2. Create a script that:
2.1. asks the user to input the name of the text file (txt) in the folder
2.2. finds the txt file; if no file is found, asks the user to input the name of the text file again
2.3. if the file exists, reads its values and displays the numbers on the left-hand side
#!/bin/bash
echo "please input file name with extension (e.g spn.txt)"
read filename
How do I create a while loop that:
looks for the filename;
if the filename is not found in the folder, asks the user to input a file name again;
if the file name is found, displays the numbers on the left-hand side?
Disclaimer: this does not define a loop and hence may not answer your question.
This said, if you need to quickly manage some user interactions you can try Zenity
#!/bin/bash
FILE=$(zenity --file-selection --title="Select a File")
case $? in
0)
echo "\"$FILE\" selected.";;
1)
echo "No file selected.";;
*)
# zenity's documentation lists -1 for unexpected errors, but $? is always 0-255
echo "An unexpected error has occurred.";;
esac
Obviously, you need Zenity to be installed, which can be a problem regarding the portability. The advantage is that you can create "complex" input forms. It all depends on your context.
You can easily achieve this with the following commands:
#!/bin/bash
filename=""
echo -e "\nStarting script ... You can press Ctrl+C anytime to stop program execution.\n"
##### Ensure the user will provide a valid file name #################
while [[ ! -r "$filename" ]] ; do
echo -n "Please, enter the file name: (e.g spn.txt): "
read filename
done
#
# At this point we have a valid filename supplied by the user,
# so print its left-hand values, using ':' as the field delimiter
cut -d':' -f 1 "$filename"
The cut command is used above with the following parameters:
-d':' -> set the delimiter to ':'
-f 1 -> display only field number 1. If you wanted to display the right-hand numbers instead, you could use -f 2 here: with ':' as the field delimiter, the right-hand numbers are located in the second field.
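A quick way to see the cut behavior, piping in the first three sample lines from the question:

```shell
# Feed cut three "left:right" lines and keep only the first field.
printf '4:3\n5:5\n1:5\n' | cut -d':' -f 1
# prints:
# 4
# 5
# 1
```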
#!/bin/bash
### Maintainer: Vrej Abramian, Git: https://github.com/vrej-ab, vrejab#gmail.com ###
### Configuration Parameters Start ###
request_filename_message="Please input file name with extension (e.g spn.txt): "
filename_variable='filename'
attempts_count='3'
_file_check='' # Leave this empty
### Configuration Parameters End ###
### Functions Start ###
# Unified messages
echo_green(){ echo -en "\033[0;32m$*" ;echo -e "\033[0;0m"; }
echo_yellow(){ echo -en "\033[1;33m$*" ;echo -e "\033[0;0m"; }
echo_red(){ echo -en "\033[0;31m$*" ;echo -e "\033[0;0m"; }
# Prompt the user for a valid `filename`
request_filename_function(){
# Use `read` with `-p` option which will prompt your message and
# avoid using extra `echo` command.
read -p "${request_filename_message}" "${filename_variable}"
}
# Check if the entered filename is accessible
file_check(){
if [ -r "${!filename_variable}" ] ;then
echo_green "\n[INFO]: ${!filename_variable} - file is available."
_file_check='pass'
else
echo_red "\n[ERROR]: ${!filename_variable} - file is either unavailable or inaccessible."
_file_check='fail'
fi
}
# If the provided `filename` is not available, this allows retries
retry_function(){
local i=1
while [ "${_file_check}" != 'pass' ] && [ ${i} -lt ${attempts_count} ] ;do
echo_yellow "\n[WARN]: Please try again - $(( attempts_count-i )) times left!"
(( i++ ))
request_filename_function
file_check
done
# ${i} never exceeds ${attempts_count}, so test the check result instead
if [ "${_file_check}" != 'pass' ] ;then
echo_red "[FATAL]: No more attempts, Bye-Bye...\n"
exit 1
fi
}
}
# Get the provided file's first-column data - assuming that the delimiter is a ':' character in the file
left_handside_numbers_function(){
awk -F':' '{print $1}' "${1}"
}
# Filter the user-defined parameters in this script
defined_parameters_function(){
defined_parameters=$(sed -n '/^### Configuration Parameters Start ###/,/^### Configuration Parameters End ###/p' "${0}" | \
grep -v '^\#' | awk -F'=' '{print $1}')
defined_parameters="${filename_variable} ${defined_parameters}"
}
# Run cleanup jobs
cleanup_function(){
unset ${defined_parameters}
}
### Functions End ###
### Script Body Start ###
request_filename_function
file_check
retry_function
left_handside_numbers_function "${!filename_variable}"
defined_parameters_function
cleanup_function
### Script Body End ###
exit 0
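One construct in the script above worth noting is the indirection: the variable *name* is stored in `filename_variable`, `read` writes into it by name, and `${!filename_variable}` dereferences it. A minimal sketch of that mechanism (the `spn.txt` value is just an illustration):

```shell
#!/bin/bash
# Indirect expansion: `read` can store into a variable whose *name* is
# held in another variable; ${!name} then dereferences it.
filename_variable='filename'
read -r "${filename_variable}" <<< 'spn.txt'   # stores 'spn.txt' into $filename
echo "${!filename_variable}"                   # prints: spn.txt
```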

Calling a function that decodes in base64 in bash

#!/bin/bash
#if there are no args supplied exit with 1
if [ "$#" -eq 0 ]; then
echo "Unfortunately you have not passed any parameter"
exit 1
fi
#loop over each argument
for arg in "$#"
do
if [ -f arg ]; then
echo "$arg is a file."
#iterates over the files stated in arguments and reads them $
cat $arg | while read line;
do
#should access only first line of the file
if [ head -n 1 "$arg" ]; then
process line
echo "Script has ran successfully!"
exit 0
#should access only last line of the file
elif [ tail -n 1 "$arg" ]; then
process line
echo "Script has ran successfully!"
exit 0
#if it accesses any other line of the file
else
echo "We only process the first and the last line of the file."
fi
done
else
exit 2
fi
done
#function to process the passed string and decode it in base64
process() {
string_to_decode = "$1"
echo "$string_to_decode = " | base64 --decode
}
Basically, what I want this script to do is loop over the arguments passed to it and, if an argument is a file, call the function that decodes base64, but only on the first and last lines of the chosen file. Unfortunately, even when I call it with a valid file, it does nothing. I think it might be encountering problems with the if [ head -n 1 "$arg" ]; then part of the code. Any ideas?
EDIT: So I understood that I am actually just extracting the first line over and over again without really comparing it to anything. So I tried changing the if conditional of the code to this:
first_line = $(head -n 1 "$arg")
last_line = $(tail -n 1 "$arg")
if [ first_line == line ]; then
process line
echo "Script has ran successfully!"
exit 0
#should access only last line of the file
elif [ last_line == line ]; then
process line
echo "Script has ran successfully!"
exit 0
My goal is to iterate through files for example one is looking like this:
MTAxLmdvdi51awo=
MTBkb3duaW5nc3RyZWV0Lmdvdi51awo=
MXZhbGUuZ292LnVrCg==
And to decode the first and the last line of each file.
To decode the first and last line of each file given to your script, use this:
#! /bin/bash
for file in "$@"; do
[ -f "$file" ] || exit 2
head -n1 "$file" | base64 --decode
tail -n1 "$file" | base64 --decode
done
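A quick round-trip check of that head/tail approach, using a throwaway sample file built with mktemp (the file names and contents here are made up for the demo):

```shell
# Build a 3-line sample whose first and last lines are base64-encoded,
# then decode just those two lines.
sample=$(mktemp)
printf '%s\n' "$(printf 'first' | base64)" "middle" "$(printf 'last' | base64)" > "$sample"
head -n1 "$sample" | base64 --decode   # prints: first
tail -n1 "$sample" | base64 --decode   # prints: last
```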
Yeah, as the others already said, the true goal of the script isn't really clear. That said, I imagine every variation of what you may have wanted to do would be covered by something like:
#!/bin/bash
process() {
encoded="$1";
decoded="$( echo "${encoded}" | base64 --decode )";
echo " Value ${encoded} was decoded into ${decoded}";
}
(( $# )) || {
echo "Unfortunately you have not passed any parameter";
exit 1;
};
while (( $# )) ; do
arg="$1"; shift;
if [[ -f "${arg}" ]] ; then
echo "${arg} is a file.";
else
exit 2;
fi;
content_of_first_line="$( head -n 1 "${arg}" )";
echo "Content of first line: ${content_of_first_line}";
process "${content_of_first_line}";
content_of_last_line="$( tail -n 1 "${arg}" )";
echo "Content of last line: ${content_of_last_line}";
process "${content_of_last_line}";
line=""; linenumber=0;
while IFS="" read -r line; do
(( linenumber++ ));
echo "Iterating over all lines. Line ${linenumber}: ${line}";
process "${line}";
done < "${arg}";
done;
some additions you may find useful:
If the script is invoked with multiple filenames, let's say 4 different filenames, and the second file does not exist (but the others do),
do you really want the script to process the first file, then notice that the second file doesn't exist, and exit at that point, without processing the (potentially valid) third and fourth files?
replacing the line:
exit 2;
with
continue;
would make it skip any invalid filenames, and still process valid ones that come after.
Also, within your process function, directly after the line:
decoded="$( echo "${encoded}" | base64 --decode )";
you could check whether the decoding was successful before echoing whatever garbage may result if the line wasn't valid base64:
if [[ "$?" -eq 0 ]] ; then
echo " Value ${encoded} was decoded into ${decoded}";
else
echo " Garbage.";
fi;
--
To answer your followup question about the IFS/read-construct, it is a mixture of a few components:
read -r line
reads a single line from the input (-r tells it not to do any funky backslash escaping magic).
while ... ; do ... done ;
This while loop surrounds the read statement, so that we keep repeating the process of reading one line, until we run out.
< "${arg}";
This feeds the content of filename $arg into the entire block of code as input (so this becomes the source that the read statement reads from)
IFS=""
This tells the read statement to use an empty value instead of the real built-in IFS value (the internal field separator). It's generally a good idea to do this for every read statement, unless you have a use case that requires splitting the line into multiple fields.
If instead of
IFS="" read -r line
you were to use
IFS=":" read -r username _ uid gid _ homedir shell
and read from /etc/passwd which has lines such as:
root:x:0:0:root:/root:/bin/bash
apache:x:48:48:Apache:/usr/share/httpd:/sbin/nologin
then that IFS value would allow it to load those values into the right variables (in other words, it would split on ":")
The default value for IFS is inherited from your shell; it usually contains the space and TAB characters, and maybe some other stuff. When you read into one single variable ($line, in your case), IFS isn't really applied, but as soon as you change the read statement to add another variable, word splitting takes effect, and the lack of a local IFS= value will make the exact same script behave very differently in different situations. As such, it tends to be a good habit to control it at all times.
The same goes for quoting your variables, like "$arg" or "${arg}" instead of $arg. It doesn't matter when arg="hello", but once the value starts containing spaces, all sorts of things can suddenly act differently; surprises are never a good thing.
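To make the IFS discussion above concrete, here is a minimal demonstration of the IFS=":" read construct on a /etc/passwd-style line (a literal string here, not the real file):

```shell
# Splitting one colon-delimited line into named fields with a per-command IFS.
line='root:x:0:0:root:/root:/bin/bash'
IFS=":" read -r username _ uid gid _ homedir shell <<< "$line"
echo "$username uid=$uid shell=$shell"
# prints: root uid=0 shell=/bin/bash
```

Because IFS is set only as a prefix to the read command, the shell's global IFS is left untouched.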

Improve Bash script to obtain file names, remove part of their names, and display the result until a key is entered

I have fully working code that obtains a list of file names from '$resultDir', removes the final 3 characters from each, runs a different script using the adjusted names, and displays a list of the edited names that produced errors in the separate script, until the letter 'q' is entered by the user.
Here is the code:
#!/bin/bash
# Declarations
declare -a testNames
declare -a errorNames
declare -a errorDescription
declare resultDir='../Results'
declare outputCheck=''
declare userEntry=''
declare -i userSelect
# Obtain list of files in $resultDir and remove the last 3 chars from each file name
for test in `ls $resultDir`; do
testNames+=("${test::-3}")
done
# Run 'checkFile.sh' script for each adjusted file name and add name and result to apporopriate arrays if 'checkFile.sh' script fails
for f in "${testNames[@]}"; do
printf '%s' "$f: "
outputCheck=$(./scripts/checkFile.sh -v "${f}" check)
if [[ $outputCheck != "[0;32mNo errors found.[0m" ]];
then
errorNames+=("$f")
errorDescription+=("$outputCheck")
echo -e '\e[31mError(s) found\e[0m'
else
printf '%s\n' "$outputCheck"
fi
done
#Prompts user to save errors, if any are present
if [ "${#errorNames[@]}" != 0 ];
then
until [[ $userEntry = "q" ]]; do
echo "The following tests had errors:"
for(( i=1; i<=${#errorNames[@]}; i++ )); do
echo -e $i: "\e[31m${errorNames[i-1]}\e[0m"
done
echo "Enter the corresponding number in the list to save the errors found or enter 'q' to quit"
read -r userEntry
numInput=$userEntry
if [ $numInput -gt 0 -a $numInput -le ${#errorNames[@]} ];
then
mkdir -p ./Errors
echo -e "File" "\e[96m${errorNames[$userEntry-1]}""_Error_Info.txt\e[0m created in the 'Errors' folder which contains details of the error(s)"
echo "${errorDescription[$userEntry-1]}" > "./Errors/""${errorNames[$userEntry-1]}""_Error_Info.txt"
fi
done
echo 'Successfully Quit'
exit $?
fi
echo 'No errors found'
exit $?
As someone who is new to Bash and programming concepts, I would really appreciate any suggested improvements to this code. In particular, the script needs to run quickly as the 'checkFile.sh' script takes a long time to run itself. I have a feeling the way I have written the code is not concise either.
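As a starting suggestion (a sketch, not a drop-in replacement): the `ls` parsing in the first loop breaks on file names that contain spaces; a glob avoids that. The demo below points resultDir at a throwaway directory instead of ../Results:

```shell
# Build testNames from a glob instead of `ls`, so names with spaces survive.
resultDir=$(mktemp -d)                       # stand-in for '../Results'
touch "$resultDir/alpha_test.sh" "$resultDir/beta test.sh"
testNames=()
for path in "$resultDir"/*; do
    name=${path##*/}             # strip the leading directory part
    testNames+=("${name::-3}")   # drop the final 3 characters (".sh")
done
printf '%s\n' "${testNames[@]}"
# prints:
# alpha_test
# beta test
```

`${name::-3}` requires bash 4.2 or newer; on older shells, `${name%???}` does the same job.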

Loop command, piping previous command's output

I understand that you can use loops in bash to repeat a command a certain amount of times, although it is more conveniently used within bash scripts.
For example, say I have a file that has been compressed several times, and I wish to fully decompress it.
cat file.txt | gzip -d | gzip -d | gzip -d
This is practical enough for a file that has been compressed 3 times, but it would become unwieldy if the file was compressed, for example, 18 times. How could this be simplified? I want to run the gzip -d command on the previous command's output n times. Is there a way to execute this from the command line?
You could do it with something along those lines (pardon any syntax errors; consider this pseudo-code close to bash syntax):
#!/bin/bash
# $1 = iterations left
# $2 = final output file
recursive_gzip()
{
if
[[ "$1" -gt 0 ]]
then
gzip -d | recursive_gzip $(($1 - 1)) "$2"
else
gzip -d > "$2"
fi
}
recursive_gzip 18 "file.txt" <"file.txt.gz"
Please note I replaced your cat with a redirection.
You could generalize the idea to share the same function for compressing / decompressing, and actually make it work for an arbitrary command by passing that command as the positional arguments from the third one onward.
#!/bin/bash
# $1 = iterations left
# $2 = final output file
recursive_pipe()
{
if
[[ "$1" -gt 0 ]]
then
"${@:3}" | recursive_pipe $(($1 - 1)) "$2" "${@:3}"
else
"${@:3}" > "$2"
fi
}
# Create gzipped file
recursive_pipe 18 "file.txt.gz" gzip <"file.txt"
# Uncompress gzipped file
recursive_pipe 18 "file.txt" gzip -d <"file.txt.gz"
What if you can't remember, or weren't told, how often the file was compressed?
You would like to gunzip until it is a normal file.
When you do not want to overwrite your original file, making a copy is the easiest thing:
tmpfile=/tmp/mightbegz
cp file.txt "${tmpfile}"
while [ $? -eq 0 ]; do
echo "Gunzipping again"
mv "${tmpfile}" "${tmpfile}.gz"
gunzip "${tmpfile}.gz"
done
The while loop will stop when gzip can not gunzip the file.
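For the fixed-count case, a plain loop also works and avoids the recursion entirely. decompress_n below is a made-up helper name, and the example builds a triply-compressed file just to undo it:

```shell
#!/bin/bash
# Run the file through gzip -d a fixed number of times with a plain loop.
decompress_n() {
    local n=$1 in=$2 out=$3 i
    cp -- "$in" "$out"
    for (( i = 0; i < n; i++ )); do
        gzip -d < "$out" > "$out.tmp" && mv -- "$out.tmp" "$out"
    done
}
# Example: compress three times, then undo all three layers.
printf 'hello\n' > file.txt
gzip -c file.txt | gzip -c | gzip -c > file.txt.gz3
decompress_n 3 file.txt.gz3 file.out   # file.out now matches file.txt
```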

bash scripting reading numbers from a file

Hello, I need to make a bash script that will read from a file and then add the numbers in the file. For example, the file I'm reading would read as:
cat samplefile.txt
1
2
3
4
The script will use the file name as an argument and then add those numbers and print out the sum. I'm stuck on how I would go about reading the integers from the file and storing them in a variable.
So far what i have is the following:
#! /bin/bash
file="$1" #first arg is used for file
sum=0 #declaring sum
readnums #declaring var to store read ints
if [! -e $file] ; do #checking if files exists
echo "$file does not exist"
exit 0
fi
while read line ; do
do < $file
exit
What's the problem? Your code is close, except readnums is not a valid command name, you need spaces inside the square brackets in the if condition, the if body should be opened with then and closed with fi rather than do, and the while loop has to end with done < "$file". (Oh, and "$file" should properly be inside double quotes.)
#!/bin/bash
file=$1
sum=0
if ! [ -e "$file" ] ; then # spaces inside square brackets; 'then', not 'do'
echo "$0: $file does not exist" >&2 # error message includes $0 and goes to stderr
exit 1 # exit code is non-zero for error
fi
while read -r line ; do
sum=$((sum + line))
done < "$file" # 'done', not 'do'
printf 'Sum is %d\n' "$sum"
# exit # not useful; the script will exit anyway
However, the shell is not traditionally a very good tool for arithmetic. Maybe try something like
awk '{ sum += $1 } END { print "Sum is", sum }' "$file"
perhaps inside a snippet of shell script to check that the file exists, etc (though you'll get a reasonably useful error message from Awk in that case anyway).
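Checking that Awk one-liner against the sample numbers from the question:

```shell
# Sum a column of integers: accumulate field 1, print the total at the end.
printf '1\n2\n3\n4\n' | awk '{ sum += $1 } END { print "Sum is", sum }'
# prints: Sum is 10
```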
