Scripting-Search Multiple Strings - linux

I have a script (./lookup) that will search a file ($c). The file will contain a list of cities. What I would like is to be able to search the file for what the user enters as an argument (./lookup Miami). For example, I can make the script return what I want if it is a single-word city (Miami), but I can't figure out a way to make it work for cities of two or more words (Los Angeles). I can get the single strings to return what I want with the following.
grep $1 $c
I was thinking about a loop, but I am not sure on how to do that as I am new to scripting and Linux. Thanks for any help.

Whenever arguments could possibly contain spaces, proper quoting is essential in Bash:
grep "$1" "$c"
The user will need to say ./lookup "Los Angeles". If you don't like that, you can try:
grep "$*" "$c"
Then all arguments to the script will be passed together as one string to grep.
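A runnable sketch of that second approach, with cities.txt standing in for the city file $c (the filename and sample entries are assumptions for this demo):

```shell
#!/bin/bash
# Sketch of ./lookup using "$*": all arguments are joined into one
# search string, so multi-word cities work without the caller quoting.
c="cities.txt"
printf '%s\n' "Miami" "Los Angeles" "New York" > "$c"

set -- Los Angeles             # simulate: ./lookup Los Angeles
result=$(grep "$*" "$c")       # "$*" joins the arguments: "Los Angeles"
echo "$result"
```

Note that "$*" joins the arguments with the first character of IFS, a space by default, which is exactly what multi-word city names need.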

How to pass multiple variables in grep

I have a json file that is download using curl. It has some information of a confluence page. I want to extract only 3 parts that downloaded information - that is page: id, status and title.
I have written a bash script for this and my constraint is that I am not sure how to pass multiple variables in grep command
id=id #hardcoded
status=status #hardcoded
echo Enter title you are looking for: #taking input from user here
read title_name
echo
echo
echo Here are details
curl -u username:password -sX GET "http://X.X.X.X:8090/rest/api/content?type=page&start=0&limit=200" | python -mjson.tool | grep -Eai "$title_name"|$id|$status"
Aside from a typo (you have an unbalanced quote - please always check the syntax for correctness before posting), the basic idea of your approach would work, in that
grep -Eai "$title_name|$id|$status"
would select those lines which contain the content of one of the variables title_name, id or status.
However, it is a pretty fragile solution. I don't know what the actual content of those variables can be, but for instance, if title_name were set to X.Z, it would also match lines containing the string XYZ, since the dot matches any character. Similarly, if title_name contained, say, a lone [ or (, grep would complain about an unmatched bracket or parenthesis error.
If you want the strings to be matched literally rather than taken as regular expressions, it is better to write those patterns into a file (one pattern per line) and use
grep -F -f patternfile
for searching. Of course, since you are using bash, you can also use process substitution if you prefer not using an explicit temporary file.
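A sketch of that fixed-string approach using process substitution instead of a temporary file (the variable values and the sample input are placeholders):

```shell
#!/bin/bash
title_name="My Page"
status="current"
id="12345"

# Sample of what the filtered curl output might look like.
printf '%s\n' "page id: 12345" "status: archived" "title: My Page" > sample.txt

# grep -F treats each line fed via -f as a literal string, not a regex;
# process substitution supplies the three values one per line.
matches=$(grep -Fai -f <(printf '%s\n' "$title_name" "$id" "$status") sample.txt)
echo "$matches"
```

Here only the lines containing the literal values match; a line containing merely the word "status" (but not the value of $status) is not selected.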

Shell script (bash) to match a string variable with multiple values

I am trying to write a script to compare one string variable with a list of values, i.e. if the variable matches (exactly) one of the values, then some action needs to be done.
The script is trying to match Unix pathnames, i.e. if the user enters /, /usr, /var etc., then to give an error, so that we do not get accidental corruption using the script. The list of values may change in future due to the application requirements, so I cannot have a huge "if" statement to check this.
What I intend to do is that in case if the user enters, any of the forbidden path to give an error but sub-paths which are not forbidden should be allowed, i.e. /var should be rejected but /var/opt/app should be accepted.
I cannot use regex as partial match will not work
I am not sure about using a while loop and an if statement; is there any alternative?
thanks
I like to use associative arrays for this.
declare -A nonoList=(
[/foo/bar]=1
["/some/other/path with spaces"]=1
[/and/so/on]=1
# as many as you need
)
This can be kept in a file and sourced, if you want to separate it out.
Then in your script, just do a lookup.
if [[ -n "${nonoList[$yourString]}" ]] # -n checks for nonzero length
This also saves you from creating a big file and grep'ing over it redundantly, though that also works.
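Putting the pieces together, a minimal runnable sketch of that lookup (the forbidden paths here are examples):

```shell
#!/bin/bash
# Forbidden paths are the KEYS of an associative array (bash 4+);
# an exact key match means "reject", anything else is allowed.
declare -A nonoList=(
  [/]=1
  [/usr]=1
  [/var]=1
)

check() {
  if [[ -n "${nonoList[$1]}" ]]; then
    echo forbidden
  else
    echo allowed
  fi
}

r1=$(check /var)          # exact match -> forbidden
r2=$(check /var/opt/app)  # sub-path is not a key -> allowed
echo "$r1 $r2"
```

Because the lookup is by exact key, sub-paths such as /var/opt/app pass through untouched, which is exactly the behavior the question asks for.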
As an alternative, if you KNOW there will not be embedded newlines in any of those filenames (it's a valid character, but messy for programming) then you can do this:
$: cat foo
/foo/bar
/some/other/path with spaces
/and/so/on
Just a normal file with one file-path per line. Now,
chkSet=$'\n'"$(<foo)"$'\n' # single var, newlines before & after each
Then in your processing, assuming f=/foo/bar or whatever file you're checking,
if [[ "$chkSet" =~ $'\n'"$f"$'\n' ]] # check for a hit
This won't give you accidental hits on /some/other/path when the actual filename is /some/other/path with spaces because the pattern explicitly checks for a newline character before and after the filename. That's why we explicitly assure they exist at the front and end of the file. We assume they are in between, so make sure your file doesn't have any spaces (or any other characters, like quotes) that aren't part of the filenames.
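A runnable sketch of that newline-delimited check (the file name foo and the paths are the examples from above):

```shell
#!/bin/bash
# One path per line; newlines act as delimiters on both sides.
printf '%s\n' "/foo/bar" "/some/other/path with spaces" "/and/so/on" > foo

chkSet=$'\n'"$(<foo)"$'\n'      # a newline before and after every entry

f="/some/other/path"            # only a PREFIX of a stored path
[[ "$chkSet" =~ $'\n'"$f"$'\n' ]] && hit1=yes || hit1=no

f="/some/other/path with spaces"  # the full stored path
[[ "$chkSet" =~ $'\n'"$f"$'\n' ]] && hit2=yes || hit2=no

echo "$hit1 $hit2"
```

The quoted parts of the =~ pattern are matched literally, so paths containing regex metacharacters are safe here too.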
If you KNOW there will also be no embedded whitespace in your filenames, it's a lot easier.
mapfile -t nopes < foo
if [[ " ${nopes[*]} " =~ " $yourString " ]]; then echo found; else echo no; fi
Note that " ${nopes[*]} " embeds spaces (technically it uses the first character of $IFS, but that's a space by default) into a single flattened string. Again, literal spaces before and behind key and list prevent start/end mismatches.
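The same idea, runnable end to end (file name foo and the entries are examples; valid only when no entry contains whitespace):

```shell
#!/bin/bash
printf '%s\n' "/foo/bar" "/and/so/on" > foo
mapfile -t nopes < foo          # one array element per line

yourString="/and/so/on"
# Flatten the array with spaces and look for " $yourString " literally;
# the surrounding spaces prevent prefix/suffix false positives.
if [[ " ${nopes[*]} " =~ " $yourString " ]]; then found=yes; else found=no; fi
echo "$found"
```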
Paul,
Your alternative workaround worked like a charm. I don't have any directories which need embedded spaces in them. So as long as my script can recognize that there are certain directories to avoid, it does its job.
Thanks

How to capitalize and replace characters in shell script in one echo

I am trying to find a way to capitalize and replace dashes of a string in one echo. I do not have the ability to use multiple lines for reassigning the string value.
For example:
string='test-e2e-uber' needs to echo $string as TEST_E2E_UBER
I currently can do one or the other by utilizing
${string^^} for capitalization
${string//-/_} for replacement
However, when I try to combine them it does not appear to work (bad substitution error).
Is there a correct syntax to achieve this?
echo ${string^^//-/_}
This does not directly answer your question, but the following script still achieves what you wanted:
declare -u string='test-e2e-uber'
echo ${string//-/_}
You can do that directly with the 'tr' command, in just one 'echo'
echo "$string" | tr "-" "_" | tr "[:lower:]" "[:upper:]"
TEST_E2E_UBER
I don't think 'tr' allows doing both conversions in one invocation, so I used a pipe for output redirection
or you could do something similar with 'awk'
echo "$string" | awk '{gsub("-","_",$0)} {print toupper($0)}'
TEST_E2E_UBER
in this case, I'm replacing the hyphen with 'gsub', then I'm printing the whole record in uppercase
Why do you dislike having two successive assignment statements so much? If you really hate it, you will have to resort to some external program to do the task for you, such as
string=$(tr a-z- A-Z_ <<<$string)
but I would consider it a waste of resources to create a child process for such a simple operation.
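For reference, the "two successive assignments" mentioned above look like this; bash cannot nest ${var^^} and ${var//-/_} in a single expansion, but the two statements can share a physical line:

```shell
#!/bin/bash
string='test-e2e-uber'

# First replace the dashes, then uppercase the intermediate result.
tmp=${string//-/_}; result=${tmp^^}
echo "$result"    # TEST_E2E_UBER
```

This stays entirely within the shell, avoiding the child process the answer warns about.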

Trying to iterate through files stored in variables

I have to go through 2 files stored as variables and delete the lines which contain a string stored in another variable:
file1="./file1"
file2="./file2"
text="searched text"
for i in $file1,$file2; do
sed -i.txt '/$text/d' $i
done
The files do exist in the same folder as the script.
I get "No such file or directory". I have been stuck for the past 3 hours on this and honestly I'm pretty much about to quit Linux.
You have several issues in your script. The right way to do it is:
file1="./file1"
file2="./file2"
text="searched text"
for i in "$file1" "$file2"; do
sed -i.txt "/$text/d" "$i"
done
Issues:
for expects a space delimited list of arguments, not comma separated
it is important to enclose your variable expansions in double quotes to prevent word splitting
you need double quotes to enclose the sed expression since single quotes won't expand the variable inside
You could catch these issues through shellcheck and debug mode (bash -x script) as suggested by Charles.
Sorry to say, your shell script is not nicely designed. In a shell script, multiple files should not be stored in multiple variables. Suppose you need to do the same operation on 100 different files; what will you do? So follow the style of code below. Put all your file names in a file, for example filelist.dat, now see:-
First put all the file names in filelist.dat and save it
text="searched text"
while read -r file; do
sed -i.txt "/$text/d" "$file"
done < filelist.dat
Also, I am not sure whether the sed command will work like that. If it does not, write it like below:-
sed -i.txt '/'"$text"'/d' "$file"
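A runnable sketch of the filelist approach with the quoting fixed, so $text is expanded by the shell and the loop variable is actually the one passed to sed (the file contents here are made-up samples):

```shell
#!/bin/bash
# Create sample input files and the list that names them.
printf '%s\n' "keep" "searched text here" "keep too" > file1
printf '%s\n' "searched text again" "ok" > file2
printf '%s\n' "file1" "file2" > filelist.dat

text="searched text"
# Read one filename per line; double quotes let $text expand,
# and -i.txt keeps a backup of each edited file.
while read -r file; do
  sed -i.txt "/$text/d" "$file"
done < filelist.dat

cat file1 file2
```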

Search a string in a file, then print the lines that start with that string

I have an assignment and I have no idea when it comes to managing files, reading and writing. Here's my main problem:
I have a script that manages an address book; at the moment the menu is finished and functions are being used, but I don't know how to search or write a file.
The first "option" gives the user the option (duh!) to search the address book by the contact name. The pattern I want to use is something along the lines of "name:address:email:phone", letting the user put spaces in the name and address but not in the email or phone, and only numbers in the last one. I believe I could achieve this with regular expressions, which I understand a bit from Java lessons.
How can I do this, then? I know grep may be useful, but I don't know the parameters even after reading the man pages. Parsing line by line could be done with for line in $(file), but I'm still not sure.
If you're allowed to use grep, then you probably may use awk, and that's what I would prefer for most parts of your assignment.
Looking up a contact by name:
awk -v name="Anton Kovalenko" -F: '$1==name' "$file"
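A quick demo of that lookup against a sample name:address:email:phone book (the records are made-up placeholders):

```shell
#!/bin/bash
# Build a tiny sample address book, one colon-separated record per line.
printf '%s\n' \
  "Anton Kovalenko:Some Street 1:anton@example.com:555123" \
  "Jane Doe:Other Road 2:jane@example.com:555456" > book.txt

# -F: splits on colons; the pattern $1==name prints records whose
# first field equals the name exactly, spaces included.
hit=$(awk -v name="Jane Doe" -F: '$1==name' book.txt)
echo "$hit"
```

Because the comparison is an exact string equality on field 1, no regex escaping is needed even for names with spaces.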
Here's one way to do it:
grep "^something" "$file" | while read -r line
do
    echo "$line" # do whatever you want with your $line here
done
