I have the following command output:
$ /opt/CrowdStrike/falconctl -g --aid | grep 'aid='
aid="fdwe234wfgrgf34tfsf23rwefwef3".
I want to check whether there is any string after aid= (inside the ""). If there is a value, the command's return code should be 0; if there is no value, the return code must be != 0.
Can someone please help me extend this command to get the required output?
The idea is to make my bash script fail if aid= doesn't have any value.
You can use a regex to check whether one or more characters exist inside the double quotes, and a capture group to extract that value:
if [[ $(/opt/CrowdStrike/falconctl -g --aid | grep 'aid=') =~ ^aid=\"(.+)\" ]]; then   # no trailing $ anchor: the output line ends with a period
    aid=${BASH_REMATCH[1]}    # capture group 1 holds the value between the quotes
    echo "aid is $aid"
else
    echo "aid not found"
fi
Note that the regex uses .+, which means one or more characters, since you require the string to be non-empty. This is in contrast to the usual .*, which matches zero or more characters.
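To see the difference concretely, a quick sketch (the empty aid="" line is a made-up example of what the command might print when no value is set):
line='aid=""'    # hypothetical output with an empty value
[[ $line =~ ^aid=\"(.+)\" ]] && echo "matched with .+" || echo "no match with .+"    # prints: no match with .+
[[ $line =~ ^aid=\"(.*)\" ]] && echo "matched with .*" || echo "no match with .*"    # prints: matched with .*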
I don't have falconctl on my system, so to mimic its output I'll use a couple of files:
$ head falcon*out
==> falcon.1.out <==
some stuff
aid="fdwe234wfgrgf34tfsf23rwefwef3".
some more stuff
==> falcon.2.out <==
some stuff
aid=""
some more stuff
One grep idea:
grep -Eq '^aid="[^"]+"' <filename>
Where:
-E - enable extended regex support
-q - run in silent/quiet mode (suppress all output)
the return code can be captured from $?
Taking for a test drive:
for fname in falcon*out
do
printf "\n############# %s\n" "$fname"
cat "$fname" | grep -Eq '^aid="[^"]+"' "$fname"
echo "return code: $?"
done
This generates:
############# falcon.1.out
return code: 0
############# falcon.2.out
return code: 1
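Tying this back to the original command, a minimal sketch of the check inside a script (untested against a real falconctl, so the exact output format is assumed from the question):
if /opt/CrowdStrike/falconctl -g --aid | grep -Eq '^aid="[^"]+"'; then
    echo "aid present"
else
    echo "aid missing or empty" >&2
    exit 1    # fail the script, as required
fi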
I'm trying to write this function that searches for vowels in a string:
x="A\|E\|I\|O\|U\|a\|e\|i\|o\|u"
string () {
if echo $1 | grep -q $x
then
echo $1 | tr -d $x
fi
}
string
When I run it, it returns an empty string.
I have tried to recreate this without a function and it worked:
x="A\|E\|I\|O\|U\|a\|e\|i\|o\|u"
if echo $1 | grep -q $x
then
echo $1 | tr -d $x
fi
No function:
root@ubuntu-2gb-fra1-01:~# bash test2.sh "This website is for losers LOL!"
Ths wbst s fr lsrs LL!
With function:
root@ubuntu-2gb-fra1-01:~# bash test.sh "This website is for losers LOL!"
root@ubuntu-2gb-fra1-01:~#
Can anyone explain to me what the reason is?
Thanks
You run your function string without parameters. Hence inside string $1 is always empty and the condition is never met.
You can for instance call it as
string "$1"
to forward the current first argument to your function.
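For completeness, a minimal sketch of the script with just that change applied (I've also quoted the expansions, which is good practice but not the cause of the original problem):
#!/bin/bash
x="A\|E\|I\|O\|U\|a\|e\|i\|o\|u"
string () {
    if echo "$1" | grep -q "$x"
    then
        echo "$1" | tr -d "$x"
    fi
}
string "$1"    # forward the script's first argument to the function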
You need to add the following line at the beginning of your code:
i=$1
and change the references to the variable in the function from $1 to $i.
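A sketch of what that variant looks like ($i is a global variable here, which is why the function body can see it without any argument being passed):
x="A\|E\|I\|O\|U\|a\|e\|i\|o\|u"
i=$1    # save the script's first argument in a global variable
string () {
    if echo "$i" | grep -q "$x"
    then
        echo "$i" | tr -d "$x"
    fi
}
string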
You don't need grep or tr for this.
string () {
    case $1 in
        *[AEIOUaeiou]*)
            echo "${1//[AEIOUaeiou]/}";;
        *) echo "$1";;
    esac
}
But of course, the substitution does nothing if the string doesn't contain any of those characters, so all of this can be reduced to
string () {
    echo "${1//[AEIOUaeiou]/}"
}
The case statement is portable all the way back to the original Bourne shell, but the ${variable//pattern/replacement} syntax is specific to Bash.
Call it like string "$1" to run it on the script's first command-line argument.
I am currently running a script with an if statement. Before I run the script, I want to make sure the file provided as the first argument has certain characters.
If the file does not have those characters in the right spots, the script should print "File is Invalid" on the command line.
For the if statement to be true, the file needs to have at least one hyphen in Field 1 line 1 and at least one comma in Field one Line one.
How would I create an if statement with perhaps a test command to validate those certain characters are present?
Thanks
I'm new to Linux/Unix. This is my homework, so I haven't really tried anything yet, only brainstormed possible solutions.
function usage
{
    echo "usage: $0 filename ..."
    echo "ERROR: $1"
}
if [ $# -eq 0 ]
then
    usage "Please enter a filename"
else
    name="Yaroslav Yasinskiy"
    echo $name
    date
    while [ $# -gt 0 ]
    do
        if [ -f $1 ]
        then
            if <--------- here is where the answer would be
                starting_data=$1
                echo
                echo $1
                cut -f3 -d, $1 > first
                cut -f2 -d, $1 > last
                cut -f1 -d, $1 > id
                sed 's/$/:/' last > last1
                sed '/last:/ d' last1 > last2
                sed 's/^ *//' last2 > last3
                sed '/first/ d' first > first1
                sed 's/^ *//' first1 > first2
                sed '/id/ d' id > id1
                sed 's/-//g' id1 > id2
                paste -d\ first2 last3 id2 > final
                cat final
                echo ''
        else
            echo
            usage "Could not find file $1"
        fi
        shift
    done
fi
In answer to your direct question:
For the if statement to be true, the file needs to have at least one
hyphen in Field 1 line 1 and at least one comma in Field one Line one.
How would I create an if statement with perhaps a test command to
validate those certain characters are present?
Bash provides all the tools you need. While you could call awk, you really just need to read the first line of the file into two variables (say a and b) and then use [[ $a =~ regex ]], where the regex is an extended regular expression that verifies that the first field (contained in $a) contains both a '-' and a ','.
For details on the [[ =~ ]] expression, see bash(1) - Linux manual page under the section labeled [[ expression ]].
Let's start with read. When you provide two variables, read will read the first field (based on normal word-splitting given by IFS, the Internal Field Separator, default $' \t\n' - space, tab, newline). So by doing read -r a b you read the first field into a and the rest of the line into b (you don't care about b for your test).
Your regex can be ([-]+.*[,]+|[,]+.*[-]+), which is an (x|y), i.e. x OR y, expression, where x is [-]+.*[,]+ (one or more '-', then anything, then one or more ',') and y is [,]+.*[-]+ (one or more ',', then anything, then one or more '-'). So by using the '|', the regex accepts either a hyphen, then zero or more characters, then a comma, or a comma, then zero or more characters, then a hyphen, in the first field.
How do you read the line? With simple redirection, e.g.
read -r a b < "$1"
So your conditional test in your script would look something like:
if [ -f $1 ]
then
    read -r a b < "$1"
    if [[ $a =~ ([-]+.*[,]+|[,]+.*[-]+) ]]   # <-- here is where the ...
    then
        starting_data=$1
        ...
    else
        echo "File is Invalid" >&2    # redirection to 2 (stderr)
    fi
else
    echo
    usage "Could not find file $1"
fi
shift
...
Example Test Files
$ cat valid
dog-food, cat-food, rabbit-food
50lb 16lb 5lb
$ cat invalid
dogfood, catfood, rabbitfood
50lb 16lb 5lb
Example Use/Output
$ read -r a b < valid
if [[ $a =~ ([-]+.*[,]+|[,]+.*[-]+) ]]; then
echo "file valid"
else
echo "file invalid"
fi
file valid
and for the file without the certain characters:
$ read -r a b < invalid
if [[ $a =~ ([-]+.*[,]+|[,]+.*[-]+) ]]; then
echo "file valid"
else
echo "file invalid"
fi
file invalid
Now you really need to concentrate on eliminating the spawning of subshells: in your loop you call cut 3 times, sed 7 times, paste once and then cat, at least a dozen child processes per file. While it is good that you are thinking through what you need to do and getting it working, as mentioned in my comment, any time you are looping you want to reduce the number of processes spawned to the greatest extent possible. I suspect, as @Mig answered, that awk is the proper tool here and can likely replace all 12 of those calls with a single invocation.
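To give a rough idea of what that consolidation could look like, here is a sketch only; the column layout (field 1 = id, field 2 = last name, field 3 = first name, with a header line) is inferred from your cut/sed commands, so adjust as needed:
awk -F',' 'NR > 1 {                # skip the header line
    id = $1; last = $2; first = $3
    gsub(/-/, "", id)              # drop the hyphens from the id
    gsub(/^ +/, "", last)          # trim leading spaces
    gsub(/^ +/, "", first)
    print first, last ":", id      # "first last: id", space separated
}' "$1" > final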
I would personally use awk for this whole part, since you want to test fields and build a string from concatenated fields. Awk is perfect for that.
But here is a small script which shows how you could just test your file's first line:
if [[ $(head -n 1 file.csv | awk '$1~/-/ && $1~/,/ {print "MATCH"}') == 'MATCH' ]]; then
    echo "yes"
else
    echo "no"
fi
It looks like overkill when you're not doing the whole thing in awk, but it works. I am sure there is a way to test with only one regex, but that would involve knowing which flavour of awk you have, because I don't think they all use the same regex engine. Therefore I left it out for the sake of simplicity.
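For what it's worth, the alternation operator | is part of POSIX extended regular expressions, which every awk supports, so a single-regex sketch that relies on awk's exit status rather than comparing printed text could look like this:
if head -n 1 file.csv | awk '{ exit !($1 ~ /-.*,|,.*-/) }'; then
    echo "yes"
else
    echo "no"
fi
# caveat: an empty file would count as "yes", since the awk block never runs and awk exits 0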
Say you have the user enter a number 0-3 and you want to test it. The most common way seems to be:
[[ $var =~ ^[0-3]$ ]]
But how would you use this with:
test expression
My initial attempt doesn't evaluate correctly, e.g.
read -p "Enter selection [0-3] > "
if test $REPLY == '^[0-3]$' ; then
...
It just evaluates the if statement as false.
test is equivalent to the [ ] construct, but not to [[ ]], which is an extended version. The =~ regex operator is only available in the extended test, so for plain test or [ ] you have to pull the regex evaluation from somewhere else.
One fix is grep. This pipeline will catch and print the matches:
echo "$REPLY" | grep '^[0-3]$'
Using test with a string evaluates positively if the string is non-empty. Compare these two:
test "" && echo ok
and
test "a" && echo ok
Knowing this, it's now easy to build a compound test from both elements.
test "$(echo "$REPLY" | grep '^[0-3]$')"
And this can be applied to the script:
read -p "Enter selection [0-3] > "
if test "$(echo "$REPLY" | grep '^[0-3]$')"; then
...
fi
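If you don't need the matched text itself, a shorter variant of the same idea (just a sketch) lets grep's exit status drive the if directly, with no command substitution or test call:
read -p "Enter selection [0-3] > "
if echo "$REPLY" | grep -q '^[0-3]$'; then
    echo "valid selection"      # placeholder for the real work
else
    echo "invalid selection"
fi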
You can use a regex in Bash like this:
echo -n "Your answer> "
read REPLY
if [[ $REPLY =~ ^[0-9]+$ ]]; then
    echo Numeric
else
    echo Non-numeric
fi
Please check the post Using Bash's regular expressions.
I have a file that contains directory names:
my_list.txt :
/tmp
/var/tmp
I'd like to check in Bash, before adding a directory name, whether that name already exists in the file.
grep -Fxq "$FILENAME" my_list.txt
The exit status is 0 (true) if the name was found, 1 (false) if not, so:
if grep -Fxq "$FILENAME" my_list.txt
then
    # code if found
else
    # code if not found
fi
Explanation
Here are the relevant sections of the man page for grep:
grep [options] PATTERN [FILE...]
-F, --fixed-strings
Interpret PATTERN as a list of fixed strings, separated by newlines, any of which is to be matched.
-x, --line-regexp
Select only those matches that exactly match the whole line.
-q, --quiet, --silent
Quiet; do not write anything to standard output. Exit immediately with zero status if any match is found, even if an error was detected. Also see the -s or --no-messages option.
Error handling
As rightfully pointed out in the comments, the above approach silently treats error cases as if the string was found. If you want to handle errors in a different way, you'll have to omit the -q option, and detect errors based on the exit status:
Normally, the exit status is 0 if selected lines are found and 1 otherwise. But the exit status is 2 if an error occurred, unless the -q or --quiet or --silent option is used and a selected line is found. Note, however, that POSIX only mandates, for programs such as grep, cmp, and diff, that the exit status in case of error be greater than 1; it is therefore advisable, for the sake of portability, to use logic that tests for this general condition instead of strict equality with 2.
To suppress the normal output from grep, you can redirect it to /dev/null. Note that standard error remains undirected, so any error messages that grep might print will end up on the console as you'd probably want.
To handle the three cases, we can use a case statement:
case `grep -Fx "$FILENAME" "$LIST" >/dev/null; echo $?` in
  0)
    # code if found
    ;;
  1)
    # code if not found
    ;;
  *)
    # code if an error occurred
    ;;
esac
Regarding the following solution:
grep -Fxq "$FILENAME" my_list.txt
In case you are wondering (as I did) what -Fxq means in plain English:
F: Affects how PATTERN is interpreted (fixed string instead of a regex)
x: Match whole line
q: Shhhhh... minimal printing
From the man file:
-F, --fixed-strings
Interpret PATTERN as a list of fixed strings, separated by newlines, any of which is to be matched. (-F is specified by POSIX.)
-x, --line-regexp
Select only those matches that exactly match the whole line. (-x is specified by POSIX.)
-q, --quiet, --silent
Quiet; do not write anything to standard output. Exit immediately with zero status if any match is found, even if an error was detected. Also see the -s or --no-messages option. (-q is specified by POSIX.)
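To make the effect of -x concrete with the my_list.txt from the original question (an illustrative run, not taken from the post): without -x a substring match is enough, which is usually not what you want when checking a list of paths:
$ grep -F "/tmp" my_list.txt     # matches both lines, because /var/tmp contains /tmp
/tmp
/var/tmp
$ grep -Fx "/tmp" my_list.txt    # matches only the exact line
/tmp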
Three methods come to mind:
1) Short test for a name in a path (I'm not sure this is your case)
ls -a "path" | grep "name"
2) Short test for a string in a file
grep -R "string" "filepath"
3) Longer bash script using regex:
#!/bin/bash
declare file="content.txt"
declare regex="\s+string\s+"
declare file_content=$( cat "${file}" )
if [[ " $file_content " =~ $regex ]] # please note the space before and after the file content
then
echo "found"
else
echo "not found"
fi
exit
This should be quicker if you have to test multiple strings against the file content, for example in a loop that changes the regex on each cycle.
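A sketch of that loop idea (the patterns here are made up for illustration), reusing the already loaded $file_content so the file is read from disk only once:
declare file_content=$( cat "content.txt" )
for regex in "\s+foo\s+" "\s+bar\s+" "\s+baz\s+"    # hypothetical patterns
do
    if [[ " $file_content " =~ $regex ]]
    then
        echo "found: $regex"
    else
        echo "not found: $regex"
    fi
done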
The easiest and simplest way would be:
isInFile=$(grep -c "string" file.txt)
if [ $isInFile -eq 0 ]; then
    #string not contained in file
else
    #string is in file at least once
fi
grep -c will return the count of how many times the string occurs in the file.
Simpler way:
if grep "$filename" my_list.txt > /dev/null
then
... found
else
... not found
fi
Tip: send output to /dev/null if you want the command's exit status but not its output.
Here's a fast way to search and evaluate a string or partial string:
if grep -R "my-search-string" /my/file.ext
then
# string exists
else
# string not found
fi
You can also first test whether the command returns any results at all by running just:
grep -R "my-search-string" /my/file.ext
grep -E "(string)" /path/to/file || echo "no match found"
The -E option makes grep use extended regular expressions.
If I understood your question correctly, this should do what you need.
You can specify the directory you would like to add through the $check variable.
If the directory is already in the list, the output is "dir already listed".
If the directory is not yet in the list, it is appended to my_list.txt.
In one line: check="/tmp/newdirectory"; [[ -n $(grep "^$check\$" my_list.txt) ]] && echo "dir already listed" || echo "$check" >> my_list.txt
@Thomas's solution didn't work for me for some reason, but I had a longer string with special characters and whitespace, so I just changed the parameters like this:
if grep -Fxq 'string you want to find' "/path/to/file"; then
    echo "Found"
else
    echo "Not found"
fi
Hope it helps someone
If you just want to check the existence of one line, you do not need to create a file. E.g.,
if grep -xq "LINE_TO_BE_MATCHED" FILE_TO_LOOK_IN ; then
    # code for if it exists
else
    # code for if it does not exist
fi
My version, using fgrep:
FOUND=`fgrep -c "FOUND" "$VALIDATION_FILE"`
if [ $FOUND -eq 0 ]; then
    echo "Not able to find"
else
    echo "able to find"
fi
I was looking for a way to do this in the terminal and filter lines in the normal "grep behaviour". Have your strings in a file strings.txt:
string1
string2
...
Then you can build a regular expression like (string1|string2|...) and use it for filtering:
cmd1 | grep -P "($(cat strings.txt | tr '\n' '|' | head -c -1))" | cmd2
Edit: The above only works if the strings don't contain any regex metacharacters; if escaping is required, it can be done like this:
cat strings.txt | python3 -c "import re, sys; [sys.stdout.write(re.escape(line[:-1]) + '\n') for line in sys.stdin]" | ...
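Alternatively (not part of the original answer, just another option), grep can read patterns from a file itself with -f, and combining that with -F treats them as fixed strings, which avoids the escaping problem entirely at the cost of literal rather than regex matching:
cmd1 | grep -F -f strings.txt | cmd2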
A grep-less solution, works for me:
MY_LIST=$( cat /path/to/my_list.txt )
if [[ "${MY_LIST}" == *"${NEW_DIRECTORY_NAME}"* ]]; then
    echo "It's there!"
else
    echo "It's not there!"
fi
based on:
https://stackoverflow.com/a/229606/3306354
grep -Fxq "String to be found" | ls -a
grep will helps you to check content
ls will list all the Files
Slightly similar to other answers, but this does not fork cat, and the entries can contain spaces:
contains() {
    [[ " ${list[@]} " =~ " ${1} " ]] && echo 'contains' || echo 'does not contain'
}
IFS=$'\r\n' list=($(<my_list.txt))
so, for a my_list.txt like
/tmp
/var/tmp
/Users/usr/dir with spaces
these tests
contains '/tmp'
contains '/bin'
contains '/var/tmp'
contains '/Users/usr/dir with spaces'
contains 'dir with spaces'
return
contains
does not contain
contains
contains
does not contain
if grep -q "$Filename$" my_list.txt
then
echo "exist"
else
echo "not exist"
fi