This is the relevant line of my bash script:
cat >> $file_name
And I receive this error:
./l7.sh: line 12: $file_name: ambiguous redirect
Here is the full code:
https://github.com/vats147/public/blob/main/l7.sh
Why am I getting this error, even though my syntax looks correct?
In your script, read file_name=$1 is invalid (read expects a plain variable name, so file_name is never set), and the unquoted cat >> $file_name then becomes an ambiguous redirect. Assign $1, the first positional parameter passed to the script, to file_name instead, or prompt for a name when no argument is given. Corrected script:
#!/bin/bash
file_name=$1    # take the file name from the first script argument
if [ -z "$file_name" ]
then
echo -e "Enter file name: \c"
read file_name
fi
if [ -f "$file_name" ]
then
if [ -w "$file_name" ]
then
echo "Type some text data. To quit, press Ctrl-D."
# cat > "$file_name"    a single angle bracket overwrites the file
# cat >> "$file_name"   two angle brackets append to the file
cat >> "$file_name"
else
echo "File does not have write permission."
fi
else
echo "File does not exist."
fi
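For the record, bash reports "ambiguous redirect" whenever a redirection target expands to zero words or to more than one word, which is exactly what an unset, unquoted file_name produces. A quick reproduction in an interactive shell:
$ unset file_name
$ cat >> $file_name
bash: $file_name: ambiguous redirect
$ file_name="two words"
$ cat >> $file_name
bash: $file_name: ambiguous redirect
Quoting the expansion, cat >> "$file_name", avoids the ambiguity.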
These are the positional arguments of the script.
Executing ./script.sh Hello World will set
$0 = ./script.sh
$1 = Hello
$2 = World
Note
If you execute ./script.sh, $0 will give ./script.sh, but if you execute it with bash script.sh, it will give script.sh.
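As a quick illustration (demo.sh is a hypothetical name):
#!/bin/bash
# demo.sh - print the script name and the first two positional parameters
echo "script: $0"
echo "first:  $1"
echo "second: $2"
Running ./demo.sh Hello World prints the script name followed by Hello and World.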
I have a txt file which contains a big list of files, one filename per row.
I was able to read it like this:
#!/bin/bash
set -e
in="${1:-file.txt}"
[ ! -f "$in" ] && { echo "$0 - File $in not found."; exit 1; }
while IFS= read -r file
do
echo "Copy $file ..."
done < "${in}"
What I actually want to achieve in the end is to read these lines, issue a cp command for the first 20 or 30 of them, delete those lines from file.txt, and then do the same thing again.
With readarray (a bash 4 builtin, also known as mapfile) you can read a fixed count of lines of input at a time; since readarray returns success even at end of input, also check that it actually read something:
while readarray -n 20 -t lines && (( ${#lines[@]} )); do
for line in "${lines[@]}"; do
echo "Copy $line ..."
done
done < "${in}"
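The copy-then-trim loop you describe could then look roughly like this sketch, assuming GNU sed for in-place editing and a hypothetical destination directory dest/:
#!/bin/bash
set -e
in="${1:-file.txt}"
dest="${2:-dest}"                       # hypothetical destination directory
batch=20
while [ -s "$in" ]; do                  # repeat while the list is non-empty
    while IFS= read -r file; do
        echo "Copy $file ..."
        cp -- "$file" "$dest/"
    done < <(head -n "$batch" "$in")    # process only the first 20 lines
    sed -i "1,${batch}d" "$in"          # then delete them from the list (GNU sed)
done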
Below is my simple function to get a user-entered file name, but for some reason my input validation isn't working.
function getname
{
echo "Please enter the name of the file to install: "
read filename
if (($args > 1))
then
echo "You entered to many arguments."
echo $USAGE
exit
fi
}
getname
bash -x test1 yields these results, as if it doesn't see any value for $args:
bash -x test1
+ getname
+ echo 'Please enter the name of the file to install: '
Please enter the name of the file to install:
+ read filename
testfile
+ (( > 1 ))
test1: line 9: ((: > 1: syntax error: operand expected (error token is "> 1")
Why isn't this working?
Thanks!
The immediate problem is that args is never assigned anywhere, so (($args > 1)) expands to the invalid (( > 1 )); note also that read puts the whole line, spaces included, into filename. There are many ways to ignore the parts after spaces (awk, cut, or sed could do the work), and even to warn about it:
#!/bin/bash
echo "Input filename:"
read -r input
filename=$(echo "$input" | awk '{ print $1 }')
echo "Filename entered is: $filename"
[ "${filename}" != "${input}" ] && echo "(warning: parts after spaces were ignored)"
Also, using read's own word splitting, you could directly read just what you want:
read filename garbage
You could also consider converting spaces to underscores (or keeping spaces as part of the filename):
read -r input
filename=$(echo "$input" | tr ' ' '_')
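Putting the pieces together, here is a minimal sketch of the validation the question's function seems to be aiming for, assuming the intent is to reject input that contains more than one word:
function getname
{
    echo "Please enter the name of the file to install: "
    read -r filename garbage            # everything after the first word lands in garbage
    if [ -n "$garbage" ]
    then
        echo "You entered too many arguments."
        exit 1
    fi
}
getname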
BRs
Hi, below is my bash script, which takes a source file and a token file.
The token file contains servicename:usage pairs.
I have to search the source file line by line for each service name; when one is found, I calculate the memory usage and replace the existing -Xmx\d{1,3}m value with the newly computed one.
You can first understand the issue from this small part of the script:
line="Superviser.childOpts:-Xmx128m"
heapMB=750
line=($(echo $line|sed "s/${-Xmx\d{1,3}m}/$-Xmx{$heapMB}m/g"))
So what is wrong with the line above?
#!/bin/bash
sourceFile=$1
tokenFile=$2
if [ -z "$sourceFile" ]
then
echo "Please provide a valid source file"
exit 1
fi
if [ -z "$tokenFile" ]
then
echo "Please provide a valid token file"
exit 1
fi
# read the token file and tokenize on : to get the service name at index 0 and the percentage usage at index 1
declare arr_token_name
declare arr_token_usage
count=0
while read line
do
# here line contains servicename:percentage
OIFS="$IFS"
IFS=$':'
arr=($line)
IFS="$OIFS"
if [ -n "$line" ]
then
arr_token_name[$count]=${arr[0]}
arr_token_usage[$count]=${arr[1]}
count=$((count + 1))
fi
done < "$tokenFile"
# read the source file line by line and test it against all the tokens
totalMemKB=$(awk '/MemTotal:/ { print $2 }' /proc/meminfo)
echo "total mem = $totalMemKB"
while read line
do
result_token_search=""
#for j in "${arr_token_name[@]}"
#do
# echo "index=$j"
#done
count2=0
for i in "${arr_token_name[@]}"
do
# search for the token in the line; if found,
# calculate its memory: take the percent usage from arr_token_usage, apply the formula, then divide by 1024 to get MB
# then replace the -Xmx\d{1,3}m part with the new value (this is the broken line)
echo "line1=$line"
result_token_search=$(echo "$line"|grep -P "$i")
if [ -n "$result_token_search" ]
then
percent_usage=${arr_token_usage[$count2]}
let heapKB=$totalMemKB*$percent_usage/100
let heapMB=$heapKB/1024
echo "before sed=$line"
line=($(echo $line|sed "s/${-Xmx\d{1,3}m}/$-Xmx{$heapMB}m/g"))
echo "new line=$line"
echo "token found in line $line , token = $i"
fi
result_token_search=""
count2=$((count2 + 1))
done
echo "$line" >> tmp.txt
done < "$sourceFile"
Try this line (sed's basic regular expressions don't understand \d, and ${-Xmx\d{1,3}m} is not a valid parameter expansion, so use a [0-9] character class instead):
line=$( sed "s/-Xmx[0-9]\+/-Xmx$heapMB/" <<<$line )
Test with your example:
kent$ line="Superviser.childOpts:-Xmx128m"
kent$ heapMB=750
kent$ line=$( sed "s/-Xmx[0-9]\+/-Xmx$heapMB/" <<<$line )
kent$ echo $line
Superviser.childOpts:-Xmx750m
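For context, a minimal end-to-end sketch of the compute-then-substitute step, assuming a hypothetical usage percentage of 25 and a readable /proc/meminfo:
#!/bin/bash
percent_usage=25                        # hypothetical value taken from the token file
totalMemKB=$(awk '/MemTotal:/ { print $2 }' /proc/meminfo)
heapMB=$(( totalMemKB * percent_usage / 100 / 1024 ))
line="Superviser.childOpts:-Xmx128m"
line=$( sed "s/-Xmx[0-9]\+m/-Xmx${heapMB}m/" <<<"$line" )
echo "$line"                            # e.g. Superviser.childOpts:-Xmx4096m, depending on total memory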
I have a file in Linux.
The file contains ranges of numbers, like this:
100,500
501,1000
1001,2000
And I have another file with a word and a number on each line:
a,105
b,110
c,550
d,670
e,900
f,80
h,1500
Then I need to filter the second file and generate files according to the ranges in the first file.
So I need 3 files:
<<110,500>>
a,105
b,110
<<501,1000>>
c,550
d,670
e,900
<<1001,2000>>
h,1500
With a bash script I can read the first file like this:
while read line
do
init=`echo $line | awk 'BEGIN {FS=","}{print $1}'`
end=`echo $line | awk 'BEGIN {FS=","}{print $2}'`
done <rangos.txt
Now I have the ranges, but I don't know how I can divide the second file according to the ranges of the first file.
Who can help me?
Thanks
Here is a sample parser in bash:
#!/bin/bash
declare file1=file1
declare file2=file2
while read line; do
if [ -z "${line}" ]; then continue; fi # skip empty lines
declare -i left=${line%%,*}
declare -i right=${line##*,}
echo "<<$left,$right>>"
OIFS=$IFS
IFS=$'\n' # split file2 on newlines, one word per line
for word in $(<$file2); do
declare letter=${word%%,*}
declare -i value=${word##*,}
if [[ $left -le $value && $value -le $right ]]; then
echo "$letter,$value"
fi
done
IFS=$OIFS
done < "${file1}"
Tested under Debian Wheezy with bash 4, it prints:
$ ./parser.sh
<<100,500>>
a,105
b,110
<<501,1000>>
c,550
d,670
e,900
<<1001,2000>>
h,1500
However, in light of your comment about Perl or other languages, you should do it in the language you or your team are most familiar with.
I assume that the two files are not sorted and that the second file has one word and one number per line.
In this case you can do something like this:
> out_file.txt
while read line; do
init=${line#*,}
end=${line%,*}
echo "<<$init,$end>>" >> out_file.txt
while read wnum; do
theNum=${wnum#*,}
if [ $theNum -le $end ] && [ $theNum -ge $init ]; then
echo "$wnum" >> out_file.txt
fi
done < word_and_num.txt
done <rangos.txt
It will be a lot easier with awk:
BEGIN { FS = "," }
NR==FNR {
map[$0]; # load data in hash
next
}
{
++count;
file = "file" count ".txt"; # create your filename
print "<<" $0 ">>" > file; # add the header to filename
for (data in map) {
split (data, fld, /,/);
if ( $1 <= fld[2] && fld[2] <= $2 ) { # add entries if in range
print (data) > file
}
}
close(file) # close your file
}
Save the above script in, say, script.awk. Run it like this:
awk -f script.awk datafile rangefile
This will create three files:
$ head file*
==> file1.txt <==
<<100,500>>
a,105
b,110
==> file2.txt <==
<<501,1000>>
c,550
d,670
e,900
==> file3.txt <==
<<1001,2000>>
h,1500